e04kdf is a comprehensive modified Newton algorithm for finding:
–an unconstrained minimum of a function of several variables;
–a minimum of a function of several variables subject to fixed upper and/or lower bounds on the variables.
First derivatives are required. The routine is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
The routine may be called by the names e04kdf or nagf_opt_bounds_mod_deriv_comp.
3 Description
e04kdf is applicable to problems of the form:
$$\mathrm{Minimize}F({x}_{1},{x}_{2},\dots ,{x}_{n})\text{\hspace{1em} subject to \hspace{1em}}{l}_{j}\le {x}_{j}\le {u}_{j}\text{, \hspace{1em}}j=1,2,\dots ,n\text{.}$$
Special provision is made for unconstrained minimization (i.e., problems which actually have no bounds on the ${x}_{j}$), problems which have only non-negativity bounds, and problems in which ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$. It is possible to specify that a particular ${x}_{j}$ should be held constant. You must supply a starting point, and a subroutine funct to calculate the value of $F\left(x\right)$ and its first derivatives $\frac{\partial F}{\partial {x}_{j}}$ at any point $x$.
A typical iteration starts at the current point $x$ where ${n}_{z}$ (say) variables are free from their bounds. The vector ${g}_{z}$, whose elements are the derivatives of $F\left(x\right)$ with respect to the free variables, is known. The matrix of second derivatives with respect to the free variables, $H$, is estimated by finite differences. (Note that ${g}_{z}$ and $H$ are both of dimension ${n}_{z}$.) The equations
$$(H+E){p}_{z}=-{g}_{z}$$
are solved to give a search direction ${p}_{z}$. (The matrix $E$ is chosen so that $H+E$ is positive definite.)
${p}_{z}$ is then expanded to an $n$-vector $p$ by the insertion of appropriate zero elements, $\alpha $ is found such that $F(x+\alpha p)$ is approximately a minimum (subject to the fixed bounds) with respect to $\alpha $; and $x$ is replaced by $x+\alpha p$. (If a saddle point is found, a special search is carried out so as to move away from the saddle point.) If any variable actually reaches a bound, it is fixed and ${n}_{z}$ is reduced for the next iteration.
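The shape of one such iteration can be sketched as follows. This is an illustrative Python outline under simplifying assumptions (all variables free, the positive-definite modification $E$ taken as a simple multiple of the identity, and a crude backtracking search for $\alpha $); it is not the NAG implementation, whose modified factorization and safeguarded linear search are considerably more refined.

```python
import numpy as np

def modified_newton_step(grad, x, h=1e-4, eta=0.5):
    """One illustrative modified Newton iteration (all variables free)."""
    n = x.size
    g = grad(x)
    # Estimate H by finite differences of the first derivatives,
    # evaluated at points h apart (cf. the delta argument).
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(x + e) - g) / h
    H = 0.5 * (H + H.T)  # symmetrize the estimate
    # Choose E = tau*I so that H + E is positive definite.
    tau = 0.0
    while True:
        try:
            np.linalg.cholesky(H + tau * np.eye(n))
            break
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, 1e-8)
    # Solve (H + E) p = -g for the search direction p.
    p = np.linalg.solve(H + tau * np.eye(n), -g)
    # Crude backtracking: accept alpha once the directional derivative
    # has been reduced by the factor eta (cf. the eta argument below).
    alpha = 1.0
    while abs(np.dot(grad(x + alpha * p), p)) > eta * abs(np.dot(g, p)):
        alpha *= 0.5
    return x + alpha * p
```

The acceptance test for $\alpha $ here mirrors the role played by the eta argument described in Section 5.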
There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all the active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., ${n}_{z}$ is increased). Otherwise minimization continues in the current subspace until the stronger convergence criteria are satisfied. If at this point there are no negative or near-zero Lagrange multiplier estimates, the process is terminated.
If you specify that the problem is unconstrained, e04kdf sets the ${l}_{j}$ to $-{10}^{6}$ and the ${u}_{j}$ to ${10}^{6}$. Thus, provided that the problem has been sensibly scaled, no bounds will be encountered during the minimization process and e04kdf will act as an unconstrained minimization algorithm.
4 References
Gill P E and Murray W (1973) Safeguarded steplength algorithms for optimization using descent methods NPL Report NAC 37 National Physical Laboratory
Gill P E and Murray W (1974) Newton-type methods for unconstrained and linearly constrained optimization Math. Programming 7 311–350
Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory
5 Arguments
1: $\mathbf{n}$ – Integer Input
On entry: the number $n$ of independent variables.
Constraint:
${\mathbf{n}}\ge 1$.
2: $\mathbf{funct}$ – Subroutine, supplied by the user. External Procedure
funct must evaluate the function $F\left(x\right)$ and its first derivatives $\frac{\partial F}{\partial {x}_{j}}$ at a specified point. (However, if you do not wish to calculate $F$ or its first derivatives at a particular $x$, there is the option of setting an argument to cause e04kdf to terminate immediately.)
1: $\mathbf{iflag}$ – Integer Input/Output
On entry: iflag will have been set to $1$ or $2$. The value $1$ indicates that only the first derivatives of $F$ need be supplied, and the value $2$ indicates that both $F$ itself and its first derivatives must be calculated.
On exit: if it is not possible to evaluate $F$ or its first derivatives at the point given in xc (or if it is wished to stop the calculations for any other reason) you should reset iflag to a negative number and return control to e04kdf. e04kdf will then terminate immediately, with ifail set to your setting of iflag.
2: $\mathbf{n}$ – Integer Input
On entry: the number $n$ of variables.
3: $\mathbf{xc}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input
On entry: the point $x$ at which the $\frac{\partial F}{\partial {x}_{j}}$, or $F$ and the $\frac{\partial F}{\partial {x}_{j}}$, are required.
4: $\mathbf{fc}$ – Real (Kind=nag_wp) Output
On exit: unless ${\mathbf{iflag}}=1$ on entry, funct must set fc to the value of the objective function $F$ at the current point $x$.
5: $\mathbf{gc}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output
On exit: funct must set ${\mathbf{gc}}\left(\mathit{j}\right)$ to the value of the first derivative $\frac{\partial F}{\partial {x}_{\mathit{j}}}$ at the point $x$, for $\mathit{j}=1,2,\dots ,n$.
8: $\mathbf{w}\left({\mathbf{lw}}\right)$ – Real (Kind=nag_wp) array Workspace
9: $\mathbf{lw}$ – Integer Input
funct is called with the same arguments iw, liw, w, lw as e04kdf. They are present so that, when other library routines require the solution of a minimization subproblem, constants needed for the function evaluation can be passed through iw and w. Similarly, you could use elements $3,4,\dots ,{\mathbf{liw}}$ of iw and elements from $\mathrm{max}\phantom{\rule{0.125em}{0ex}}(8,7\times {\mathbf{n}}+{\mathbf{n}}\times ({\mathbf{n}}-1)/2)+1$ onwards of w for passing quantities to funct from the subroutine which calls e04kdf. However, because of the danger of mistakes in partitioning, it is recommended that you pass information to funct via global variables (e.g., in COMMON blocks) rather than using iw or w at all. In any case you must not change the first $2$ elements of iw or the first $\mathrm{max}\phantom{\rule{0.125em}{0ex}}(8,7\times {\mathbf{n}}+{\mathbf{n}}\times ({\mathbf{n}}-1)/2)$ elements of w.
funct must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04kdf is called. Arguments denoted as Input must not be changed by this procedure.
Note: funct should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04kdf. If your code inadvertently does return any NaNs or infinities, e04kdf is likely to produce unexpected results.
funct should be tested separately before being used in conjunction with e04kdf.
3: $\mathbf{monit}$ – Subroutine, supplied by the user. External Procedure
If ${\mathbf{iprint}}\ge 0$, you must supply monit which is suitable for monitoring the minimization process. monit must not change the values of any of its arguments.
If ${\mathbf{iprint}}<0$, a monit with the correct argument list must still be supplied, although it will not be called.
5: $\mathbf{istate}\left({\mathbf{n}}\right)$ – Integer array Input
On entry: information about which variables are currently fixed on their bounds and which are free.
If ${\mathbf{istate}}\left(j\right)$ is negative, ${x}_{j}$ is currently:
–fixed on its upper bound if ${\mathbf{istate}}\left(j\right)=\mathrm{-1}$;
–fixed on its lower bound if ${\mathbf{istate}}\left(j\right)=\mathrm{-2}$;
–effectively a constant (i.e., ${l}_{j}={u}_{j}$) if ${\mathbf{istate}}\left(j\right)=\mathrm{-3}$.
If ${\mathbf{istate}}\left(j\right)$ is positive, its value gives the position of ${x}_{j}$ in the sequence of free variables.
6: $\mathbf{gpjnrm}$ – Real (Kind=nag_wp) Input
On entry: the Euclidean norm of the current projected gradient vector ${g}_{z}$.
7: $\mathbf{cond}$ – Real (Kind=nag_wp) Input
On entry: the ratio of the largest to the smallest elements of the diagonal factor $D$ of the approximated projected Hessian matrix. This quantity is usually a good estimate of the condition number of the projected Hessian matrix. (If no variables are currently free, cond is set to zero.)
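As a sketch of why the diagonal of $D$ serves as a condition estimate, the following (illustrative Python, not NAG code) forms the $LD{L}^{\mathrm{T}}$ factorization of a symmetric positive definite matrix and returns the ratio of the extreme diagonal elements of $D$:

```python
import numpy as np

def ldl_cond(H):
    """Factorize symmetric positive definite H as L*D*L^T (L unit lower
    triangular, D diagonal) and return max(D)/min(D), the quantity
    reported in cond."""
    n = H.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = H[j, j] - np.dot(L[j, :j] ** 2, d[:j])
        for i in range(j + 1, n):
            L[i, j] = (H[i, j] - np.dot(L[i, :j] * L[j, :j], d[:j])) / d[j]
    return d.max() / d.min()
```

For a well-scaled problem this ratio is usually of the same order as the true condition number of the matrix, at a fraction of the cost of an eigenvalue computation.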
8: $\mathbf{posdef}$ – Logical Input
On entry: .TRUE. if the approximation to the second derivative matrix for the current subspace, $H$, is positive definite, and .FALSE. otherwise.
9: $\mathbf{niter}$ – Integer Input
On entry: the number of iterations (as outlined in Section 3) which have been performed by e04kdf so far.
10: $\mathbf{nf}$ – Integer Input
On entry: the number of evaluations of $F\left(x\right)$ so far, i.e., the number of calls of funct with iflag set to $2$. Each such call of funct also calculates the first derivatives of $F$. (In addition to these calls monitored by nf, funct is called with iflag set to $1$ not more than n times per iteration.)
13: $\mathbf{w}\left({\mathbf{lw}}\right)$ – Real (Kind=nag_wp) array Workspace
14: $\mathbf{lw}$ – Integer Input
As in funct, these arguments correspond to the arguments iw, liw, w, lw of e04kdf. They are included in monit's argument list primarily for when e04kdf is called by other library routines.
monit must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04kdf is called. Arguments denoted as Input must not be changed by this procedure.
Note: you should normally print fc, gpjnrm and cond so that you can compare them with the quantities mentioned in Section 7. It is usually helpful to examine xc, posdef and nf too.
4: $\mathbf{iprint}$ – Integer Input
On entry: the frequency with which monit is to be called.
${\mathbf{iprint}}>0$
monit is called once every iprint iterations and just before exit from e04kdf.
iprint should normally be set to a small positive number.
Suggested value:
${\mathbf{iprint}}=1$.
5: $\mathbf{maxcal}$ – Integer Input
On entry: the maximum permitted number of evaluations of $F\left(x\right)$, i.e., the maximum permitted number of calls of funct with iflag set to $2$. It should be borne in mind that, in addition to the calls of funct which are limited directly by maxcal, there will be calls of funct (with iflag set to $1$) to evaluate only first derivatives.
6: $\mathbf{eta}$ – Real (Kind=nag_wp) Input
On entry: every iteration of e04kdf involves a linear minimization (i.e., minimization of $F(x+\alpha p)$ with respect to $\alpha $). eta specifies how accurately these linear minimizations are to be performed. The minimum with respect to $\alpha $ will be located more accurately for small values of eta (say, $0.01$) than for large values (say, $0.9$).
Although accurate linear minimizations will generally reduce the number of iterations (and hence the number of calls of funct to estimate the second derivatives), they will tend to increase the number of calls of funct needed for each linear minimization. On balance, it is usually more efficient to perform a low accuracy linear minimization when $n$ is small and a high accuracy minimization when $n$ is large.
Suggested values:
${\mathbf{eta}}=0.5$ if $1<n<10$;
${\mathbf{eta}}=0.1$ if $10\le n\le 20$;
${\mathbf{eta}}=0.01$ if $n>20$.
If ${\mathbf{n}}=1$, eta should be set to $0.0$ (also when the problem is effectively one-dimensional even though $n>1$; i.e., if for all except one of the variables the lower and upper bounds are equal).
Constraint:
$0.0\le {\mathbf{eta}}<1.0$.
7: $\mathbf{xtol}$ – Real (Kind=nag_wp) Input
On entry: the accuracy in $x$ to which the solution is required.
If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathrm{sol}}$, the estimated position before a normal exit, is such that $\Vert {x}_{\mathrm{sol}}-{x}_{\mathrm{true}}\Vert <{\mathbf{xtol}}\times (1.0+\Vert {x}_{\mathrm{true}}\Vert )$ where $\Vert y\Vert =\sqrt{{\displaystyle \sum _{j=1}^{n}}{y}_{j}^{2}}$. For example, if the elements of ${x}_{\mathrm{sol}}$ are not much larger than $1.0$ in modulus, and if xtol is set to ${10}^{\mathrm{-5}}$, then ${x}_{\mathrm{sol}}$ is usually accurate to about five decimal places. (For further details see Section 7.)
If the problem is scaled as described in Section 9.2 and $\epsilon $ is the machine precision, then $\sqrt{\epsilon}$ is probably the smallest reasonable choice for xtol. This is because, normally, to machine accuracy, $F(x+\sqrt{\epsilon}{e}_{j})=F\left(x\right)$, for any $j$ where ${e}_{j}$ is the $j$th column of the identity matrix. If you set xtol to $0.0$ (or any positive value less than $\epsilon $), e04kdf will use $10.0\times \sqrt{\epsilon}$ instead of xtol.
Suggested value:
${\mathbf{xtol}}=0.0$.
Constraint:
${\mathbf{xtol}}\ge 0.0$.
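The bound quoted above can be written as a one-line check (illustrative Python; x_sol, x_true and xtol as defined in this entry):

```python
import numpy as np

def within_xtol(x_sol, x_true, xtol):
    """True if ||x_sol - x_true|| < xtol * (1 + ||x_true||),
    the accuracy bound quoted for a normal exit."""
    return np.linalg.norm(x_sol - x_true) < xtol * (1.0 + np.linalg.norm(x_true))
```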
8: $\mathbf{delta}$ – Real (Kind=nag_wp) Input
On entry: the differencing interval to be used for approximating the second derivatives of $F\left(x\right)$. Thus, for the finite difference approximations, the first derivatives of $F\left(x\right)$ are evaluated at points which are delta apart. If $\epsilon $ is the machine precision, $\sqrt{\epsilon}$ will usually be a suitable setting for delta. If you set delta to $0.0$ (or to any positive value less than $\epsilon $), e04kdf will automatically use $\sqrt{\epsilon}$ as the differencing interval.
Suggested value:
${\mathbf{delta}}=0.0$.
Constraint:
${\mathbf{delta}}\ge 0.0$.
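The $\sqrt{\epsilon}$ recommendation balances truncation error (which grows with the interval) against rounding error (which grows as the interval shrinks) when differencing first derivatives. A quick demonstration (illustrative Python, using $-\mathrm{sin}$ as a stand-in for a supplied first derivative):

```python
import numpy as np

eps = np.finfo(float).eps

def second_deriv_error(h, x=1.0):
    """Error in approximating f''(x) = -cos(x) by forward-differencing
    the analytic first derivative f'(t) = -sin(t) with interval h."""
    fp = lambda t: -np.sin(t)
    approx = (fp(x + h) - fp(x)) / h
    return abs(approx - (-np.cos(x)))

# The interval sqrt(eps) is near-optimal; intervals much smaller (rounding
# error dominates) or much larger (truncation error dominates) do worse.
errors = {h: second_deriv_error(h) for h in (eps, np.sqrt(eps), 1e-2)}
```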
9: $\mathbf{stepmx}$ – Real (Kind=nag_wp) Input
On entry: an estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency a slight overestimate is preferable.)
e04kdf restricts every iterate ${x}^{\left(k\right)}$ to lie within a distance stepmx of the starting point, where $k$ is the iteration number. Thus, if the problem has more than one solution, e04kdf is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can also help to avoid possible overflow in the evaluation of $F\left(x\right)$. However, an underestimate of stepmx can lead to inefficiency.
10: $\mathbf{ibound}$ – Integer Input
On entry: indicates whether the problem is unconstrained or bounded. If there are bounds on the variables, ibound can be used to indicate whether the facility for dealing with bounds of special forms is to be used. It must be set to one of the following values:
${\mathbf{ibound}}=0$
If the variables are bounded and you are supplying all the ${l}_{j}$ and ${u}_{j}$ individually.
${\mathbf{ibound}}=1$
If the problem is unconstrained.
${\mathbf{ibound}}=2$
If the variables are bounded, but all the bounds are of the form $0\le {x}_{j}$.
${\mathbf{ibound}}=3$
If all the variables are bounded, and ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$.
${\mathbf{ibound}}=4$
If the problem is unconstrained. (The ${\mathbf{ibound}}=4$ option is provided for consistency with other routines. In e04kdf it produces the same effect as ${\mathbf{ibound}}=1\text{.}$)
Constraint:
$0\le {\mathbf{ibound}}\le 4$.
11: $\mathbf{bl}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input/Output
On entry: the fixed lower bounds ${l}_{j}$.
If ibound is set to $0$, you must set ${\mathbf{bl}}\left(\mathit{j}\right)$ to ${l}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$. (If a lower bound is not specified for any ${x}_{j}$, the corresponding ${\mathbf{bl}}\left(j\right)$ should be set to a large negative number, e.g., $-{10}^{6}$.)
If ibound is set to $3$, you must set ${\mathbf{bl}}\left(1\right)$ to ${l}_{1}$; e04kdf will then set the remaining elements of bl equal to ${\mathbf{bl}}\left(1\right)$.
If ibound is set to $1$, $2$ or $4$, bl will be initialized by e04kdf.
On exit: the lower bounds actually used by e04kdf, e.g., if ${\mathbf{ibound}}=2$, ${\mathbf{bl}}\left(1\right)={\mathbf{bl}}\left(2\right)=\cdots ={\mathbf{bl}}\left(n\right)=0.0$.
12: $\mathbf{bu}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input/Output
On entry: the fixed upper bounds ${u}_{j}$.
If ibound is set to $0$, you must set ${\mathbf{bu}}\left(\mathit{j}\right)$ to ${u}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$. (If an upper bound is not specified for any variable, the corresponding ${\mathbf{bu}}\left(j\right)$ should be set to a large positive number, e.g., ${10}^{6}$.)
If ibound is set to $3$, you must set ${\mathbf{bu}}\left(1\right)$ to ${u}_{1}$; e04kdf will then set the remaining elements of bu equal to ${\mathbf{bu}}\left(1\right)$.
If ibound is set to $1$, $2$ or $4$, bu will be initialized by e04kdf.
On exit: the upper bounds actually used by e04kdf, e.g., if ${\mathbf{ibound}}=2$, ${\mathbf{bu}}\left(1\right)={\mathbf{bu}}\left(2\right)=\cdots ={\mathbf{bu}}\left(n\right)={10}^{6}$.
13: $\mathbf{x}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Input/Output
On entry: ${\mathbf{x}}\left(\mathit{j}\right)$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.
On exit: the final point ${x}^{\left(k\right)}$. Thus, if ${\mathbf{ifail}}={\mathbf{0}}$ on exit, ${\mathbf{x}}\left(j\right)$ is the $j$th component of the estimated position of the minimum.
14: $\mathbf{hesl}\left({\mathbf{lh}}\right)$ – Real (Kind=nag_wp) array Output
On exit: during the determination of a direction ${p}_{z}$ (see Section 3), $H+E$ is decomposed into the product $LD{L}^{\mathrm{T}}$, where $L$ is a unit lower triangular matrix and $D$ is a diagonal matrix. (The matrices $H$, $E$, $L$ and $D$ are all of dimension ${n}_{z}$, where ${n}_{z}$ is the number of variables free from their bounds. $H$ consists of those rows and columns of the full estimated second derivative matrix which relate to free variables. $E$ is chosen so that $H+E$ is positive definite.)
hesl and hesd are used to store the factors $L$ and $D$. The elements of the strict lower triangle of $L$ are stored row by row in the first ${n}_{z}({n}_{z}-1)/2$ positions of hesl. The diagonal elements of $D$ are stored in the first ${n}_{z}$ positions of hesd. In the last factorization before a normal exit, the matrix $E$ will be zero, so that hesl and hesd will contain, on exit, the factors of the final estimated second derivative matrix $H$. The elements of hesd are useful for deciding whether to accept the results produced by e04kdf (see Section 7).
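The packing scheme can be illustrated by a routine which rebuilds $LD{L}^{\mathrm{T}}$ from hesl-style and hesd-style arrays (illustrative Python; it assumes exactly the row-by-row packing of the strict lower triangle described above):

```python
import numpy as np

def unpack_ldlt(hesl, hesd, nz):
    """Rebuild H + E = L*D*L^T from packed factors: hesl holds the strict
    lower triangle of the unit lower triangular L row by row, and hesd
    holds the diagonal of D (storage as described for hesl and hesd)."""
    L = np.eye(nz)
    k = 0
    for i in range(1, nz):
        for j in range(i):
            L[i, j] = hesl[k]
            k += 1
    return L @ np.diag(np.asarray(hesd)[:nz]) @ L.T
```

On a normal exit, where $E$ is zero, the product recovered this way is the final estimated second derivative matrix $H$ itself.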
15: $\mathbf{lh}$ – Integer Input
On entry: the dimension of the array hesl as declared in the (sub)program from which e04kdf is called.
16: $\mathbf{hesd}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output
On exit: used in conjunction with hesl to store the factors $L$ and $D$ of the decomposition $H+E=LD{L}^{\mathrm{T}}$; see the description of hesl above. The diagonal elements of $D$ are stored in the first ${n}_{z}$ positions of hesd; in the last factorization before a normal exit $E$ is zero, so these are the diagonal factor of the final estimated second derivative matrix $H$, and they are useful for deciding whether to accept the results produced by e04kdf (see Section 7).
17: $\mathbf{istate}\left({\mathbf{n}}\right)$ – Integer array Output
On exit: information about which variables are currently on their bounds and which are free. If ${\mathbf{istate}}\left(j\right)$ is:
–equal to $\mathrm{-1}$, ${x}_{j}$ is fixed on its upper bound;
–equal to $\mathrm{-2}$, ${x}_{j}$ is fixed on its lower bound;
–equal to $\mathrm{-3}$, ${x}_{j}$ is effectively a constant (i.e., ${l}_{j}={u}_{j}$);
–positive, ${\mathbf{istate}}\left(j\right)$ gives the position of ${x}_{j}$ in the sequence of free variables.
18: $\mathbf{f}$ – Real (Kind=nag_wp) Output
On exit: the function value at the final point given in x.
19: $\mathbf{g}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output
On exit: the first derivative vector corresponding to the final point given in x. The components of g corresponding to free variables should normally be close to zero.
24: $\mathbf{ifail}$ – Integer Input/Output
On entry: ifail must be set to $0$, $\mathrm{-1}$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes an error message to be printed and execution to be halted; otherwise execution continues. A value of $\mathrm{-1}$ means that an error message is printed, while a value of $1$ means that it is not.
If halting is not appropriate, the value $\mathrm{-1}$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $\mathrm{-1}$ is recommended since useful values can be provided in some output arguments even when ${\mathbf{ifail}}\ne {\mathbf{0}}$ on exit. When the value $-\mathbf{1}$ or $\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $\mathrm{-1}$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
Note: in some cases e04kdf may return useful information.
${\mathbf{ifail}}=1$
On entry, ${\mathbf{delta}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{delta}}\ge 0.0$.
On entry, ${\mathbf{eta}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: $0.0\le {\mathbf{eta}}<1.0$.
On entry, ${\mathbf{ibound}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: $0\le {\mathbf{ibound}}\le 4$.
On entry, ${\mathbf{ibound}}=0$ and ${\mathbf{bl}}\left(\mathit{j}\right)>{\mathbf{bu}}\left(\mathit{j}\right)$ for some $j$.
On entry, ${\mathbf{ibound}}=3$ and ${\mathbf{bl}}\left(1\right)>{\mathbf{bu}}\left(1\right)$.
On entry, ${\mathbf{lh}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{lh}}\ge \u27e8\mathit{\text{value}}\u27e9$.
On entry, ${\mathbf{liw}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{liw}}\ge 2$.
On entry, ${\mathbf{lw}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{lw}}\ge \u27e8\mathit{\text{value}}\u27e9$.
On entry, ${\mathbf{maxcal}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{maxcal}}\ge 1$.
On entry, ${\mathbf{n}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{n}}\ge 1$.
On entry, ${\mathbf{stepmx}}=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{xtol}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{stepmx}}\ge {\mathbf{xtol}}$.
On entry, ${\mathbf{xtol}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{xtol}}\ge 0.0$.
${\mathbf{ifail}}=2$
If steady reductions in $F\left(x\right)$ were monitored up to the point where this exit occurred, then the exit probably occurred simply because maxcal was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that $F\left(x\right)$ has no minimum.
The error may also be caused by mistakes in funct, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
${\mathbf{ifail}}=3$
The conditions for a minimum have not all been satisfied, but a lower point could not be found. See Section 7 for further information.
The error may also be caused by mistakes in funct, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
${\mathbf{ifail}}=5$
No further progress can be made.
All the Lagrange multiplier estimates which are not indisputably positive lie relatively close to zero, but it is impossible either to continue minimizing on the current subspace or to find a feasible lower point by releasing and perturbing any of the fixed variables. You should investigate as for ${\mathbf{ifail}}={\mathbf{3}}$.
The error may also be caused by mistakes in funct, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
${\mathbf{ifail}}<0$
User requested termination by setting iflag negative in funct.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
A successful exit (${\mathbf{ifail}}={\mathbf{0}}$) is made from e04kdf when ${H}^{\left(k\right)}$ is positive definite and when criteria (B1, B2 and B3) or criterion B4 hold.
(Quantities with superscript $k$ are the values at the $k$th iteration of the quantities mentioned in Section 3, $\epsilon $ is the machine precision and $\Vert .\Vert $ denotes the Euclidean norm.)
If ${\mathbf{ifail}}={\mathbf{0}}$, then the vector in x on exit, ${x}_{\mathrm{sol}}$, is almost certainly an estimate of the position of the minimum, ${x}_{\mathrm{true}}$, to the accuracy specified by xtol.
If ${\mathbf{ifail}}={\mathbf{3}}$ or ${\mathbf{5}}$, ${x}_{\mathrm{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but the following checks should be made. Let the largest of the first ${n}_{z}$ elements of hesd be ${\mathbf{hesd}}\left(b\right)$, let the smallest be ${\mathbf{hesd}}\left(s\right)$, and define $k={\mathbf{hesd}}\left(b\right)/{\mathbf{hesd}}\left(s\right)$. The scalar $k$ is usually a good estimate of the condition number of the projected Hessian matrix at ${x}_{\mathrm{sol}}$. If
(i)the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or fast linear rate,
(ii)${\Vert {g}_{z}\left({x}_{\mathrm{sol}}\right)\Vert}^{2}<10.0\times \epsilon $, and
then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the position of a minimum. When (ii) is true, then usually $F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$. The quantities needed for these checks are all available via monit; in particular the value of cond in the last call of monit before exit gives $k$.
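These checks can be mechanized roughly as follows (illustrative Python; the test for a fast linear rate is a crude heuristic chosen for this sketch, not the routine's own criterion):

```python
import numpy as np

def looks_converged(f_hist, g_final):
    """Crude version of checks (i) and (ii): successive reductions in F
    shrink by at least half (a stand-in for a 'fast linear rate'), and
    the squared norm of the final projected gradient is below 10*eps."""
    eps = np.finfo(float).eps
    gaps = np.abs(np.diff(f_hist))
    fast = all(gaps[i + 1] <= 0.5 * gaps[i]
               for i in range(len(gaps) - 1) if gaps[i] > 0)
    tiny = float(np.dot(g_final, g_final)) < 10.0 * eps
    return fast and tiny
```

In practice the function values and gradient norms needed for these checks are exactly the quantities made available through monit.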
Further suggestions about confirmation of a computed solution are given in the E04 Chapter Introduction.
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
e04kdf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9 Further Comments
9.1 Timing
The number of iterations required depends on the number of variables, the behaviour of $F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed in an iteration of e04kdf is $\frac{{n}_{z}^{3}}{6}+\mathit{O}\left({n}_{z}^{2}\right)$. In addition, each iteration makes ${n}_{z}$ calls of funct (with iflag set to $1$) in approximating the projected Hessian matrix, and at least one other call of funct (with iflag set to $2$). So, unless $F\left(x\right)$ and its first derivatives can be evaluated very quickly, the run time will be dominated by the time spent in funct.
9.2 Scaling
Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of ${x}_{j}$ are each in the range $(\mathrm{-1},+1)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that e04kdf will take less computer time.
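One way to approach such scaling is a simple affine change of variables (illustrative Python; centre and scale are guesses you choose for this transformation, not arguments of e04kdf):

```python
import numpy as np

def scaled(F, centre, scale):
    """Return G(y) = F(centre + scale*y).  If centre and scale are decent
    guesses at the solution and its magnitude, the rescaled variables y
    and (after dividing F by a typical value) the values of G are O(1)
    near the minimum, as the recommendation above suggests."""
    centre = np.asarray(centre, dtype=float)
    scale = np.asarray(scale, dtype=float)
    return lambda y: F(centre + scale * np.asarray(y, dtype=float))
```

A minimum of G at y then corresponds to a minimum of F at x = centre + scale*y, so the original solution is recovered by one further affine map.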
9.3 Unconstrained Minimization
If a problem is genuinely unconstrained and has been scaled sensibly, the following points apply:
(a)${n}_{z}$ will always be $n$,
(b)hesl and hesd will be factors of the full estimated second derivative matrix with elements stored in the natural order,
(c)the elements of $g$ should all be close to zero at the final point,
(d)the values of the ${\mathbf{istate}}\left(j\right)$ given by monit and on exit from e04kdf are unlikely to be of interest (unless they are negative, which would indicate that the modulus of one of the ${x}_{j}$ has reached ${10}^{6}$ for some reason),
(e)monit's argument gpjnrm simply gives the norm of the first derivative vector.
So the following routine (in which partitions of extended workspace arrays are used as bl, bu and istate) could be used for unconstrained problems:
Subroutine unckdf(n,funct,monit,iprint,maxcal,eta,xtol,delta, &
                  stepmx,x,hesl,lh,hesd,f,g,iwork,liwork,work, &
                  lwork,ifail)
!     A ROUTINE TO APPLY E04KDF TO UNCONSTRAINED PROBLEMS.
!     THE REAL ARRAY WORK MUST BE OF DIMENSION AT LEAST
!     (9*N + max(1, N*(N-1)/2)). ITS FIRST 7*N + max(1, N*(N-1)/2)
!     ELEMENTS WILL BE USED BY E04KDF AS THE ARRAY W. ITS LAST
!     2*N ELEMENTS WILL BE USED AS THE ARRAYS BL AND BU.
!     THE INTEGER ARRAY IWORK MUST BE OF DIMENSION AT LEAST (N+2).
!     ITS FIRST 2 ELEMENTS WILL BE USED BY E04KDF AS THE ARRAY IW.
!     ITS LAST N ELEMENTS WILL BE USED AS THE ARRAY ISTATE.
!     LIWORK AND LWORK MUST BE SET TO THE ACTUAL LENGTHS OF IWORK
!     AND WORK RESPECTIVELY, AS DECLARED IN THE CALLING SEGMENT.
!     OTHER PARAMETERS ARE AS FOR E04KDF.
!     .. Parameters ..
      Integer            nout
      Parameter          (nout=6)
!     .. Scalar Arguments ..
      Real (Kind=nag_wp) delta, eta, f, stepmx, xtol
      Integer            ifail, iprint, lh, liwork, lwork, maxcal, n
!     .. Array Arguments ..
      Real (Kind=nag_wp) g(n), hesd(n), hesl(lh), work(lwork), x(n)
      Integer            iwork(liwork)
!     .. Subroutine Arguments ..
      External           funct, monit
!     .. Local Scalars ..
      Integer            ibound, j, jbl, jbu, nh
      Logical            toobig
!     .. External Subroutines ..
      External           e04kdf
!     .. Executable Statements ..
!     CHECK THAT SUFFICIENT WORKSPACE HAS BEEN SUPPLIED
      nh = n*(n-1)/2
      If (nh.eq.0) nh = 1
      If (lwork.lt.9*n+nh .or. liwork.lt.n+2) Then
         Write (nout,fmt=99999)
         Stop
      End If
!     JBL AND JBU SPECIFY THE PARTS OF WORK USED AS BL AND BU
      jbl = 7*n + nh + 1
      jbu = jbl + n
!     SPECIFY THAT THE PROBLEM IS UNCONSTRAINED
      ibound = 4
      Call e04kdf(n,funct,monit,iprint,maxcal,eta,xtol,delta,stepmx, &
                  ibound,work(jbl),work(jbu),x,hesl,lh,hesd,iwork(3), &
                  f,g,iwork,liwork,work,lwork,ifail)
!     CHECK THE PART OF IWORK WHICH WAS USED AS ISTATE IN CASE
!     THE MODULUS OF SOME X(J) HAS REACHED E+6
      toobig = .false.
      Do 20 j = 1, n
         If (iwork(2+j).lt.0) toobig = .true.
   20 Continue
      If (.not. toobig) Return
      Write (nout,fmt=99998)
      Stop
99999 Format (' ***** INSUFFICIENT WORKSPACE HAS BEEN SUPPLIED *****')
99998 Format (' ***** A VARIABLE HAS REACHED E+6 IN MODULUS - NO UNCON', &
        'STRAINED MINIMUM HAS BEEN FOUND *****')
      End
starting from the initial guess $(3,\mathrm{-1},0,1)$. Before calling e04kdf, the program calls e04hcf to check the first derivatives calculated by funct.