NAG Library Manual

# NAG Library Function Document: nag_opt_bounds_no_deriv (e04jbc)

## 1  Purpose

nag_opt_bounds_no_deriv (e04jbc) is a comprehensive quasi-Newton algorithm for finding:

- an unconstrained minimum of a function of several variables;
- a minimum of a function of several variables subject to fixed upper and/or lower bounds on the variables.
No derivatives are required. nag_opt_bounds_no_deriv (e04jbc) is intended for objective functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

## 2  Specification

    #include <nag.h>
    #include <nage04.h>

    void nag_opt_bounds_no_deriv (Integer n,
         void (*objfun)(Integer n, const double x[], double *objf,
                        double g[], Nag_Comm *comm),
         Nag_BoundType bound, double bl[], double bu[], double x[],
         double *objf, double g[], Nag_E04_Opt *options,
         Nag_Comm *comm, NagError *fail)

## 3  Description

nag_opt_bounds_no_deriv (e04jbc) is applicable to problems of the form:
$$\mathrm{Minimize}\ F\left(x_1, x_2, \ldots, x_n\right) \quad \text{subject to} \quad l_j \le x_j \le u_j, \quad j = 1, 2, \ldots, n.$$
Special provision is made for unconstrained minimization (i.e., problems which actually have no bounds on the ${x}_{j}$), problems which have only non-negativity bounds, and problems in which ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$. It is possible to specify that a particular ${x}_{j}$ should be held constant. You must supply a starting point and a function objfun to calculate the value of $F\left(x\right)$ at any point $x$.
A typical iteration starts at the current point $x$ where ${n}_{z}$ (say) variables are free from both their bounds. The vector ${g}_{z}$, whose elements are finite difference approximations to the derivatives of $F\left(x\right)$ with respect to the free variables, is known. A unit lower triangular matrix $L$ and a diagonal matrix $D$ (both of dimension ${n}_{z}$), such that ${LDL}^{\mathrm{T}}$ is a positive definite approximation to the matrix of second derivatives with respect to the free variables, are also stored. The equations
$$L D L^{\mathrm{T}} p_z = -g_z$$
are solved to give a search direction ${p}_{z}$, which is expanded to an $n$-vector $p$ by the insertion of appropriate zero elements. Then $\alpha$ is found such that $F\left(x+\alpha p\right)$ is approximately a minimum (subject to the fixed bounds) with respect to $\alpha$; $x$ is replaced by $x+\alpha p$, and the matrices $L$ and $D$ are updated so as to be consistent with the change produced in the estimated gradient by the step $\alpha p$. If any variable actually reaches a bound during the search along $p$, it is fixed and ${n}_{z}$ is reduced for the next iteration. Most iterations calculate ${g}_{z}$ using forward differences, but central differences are used when they seem necessary.
There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all the active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., ${n}_{z}$ is increased). Otherwise minimization continues in the current subspace provided that this is practicable. When it is not, or when the stronger convergence criteria are already satisfied, then, if one or more Lagrange multiplier estimates are close to zero, a slight perturbation is made in the values of the corresponding variables in turn until a lower function value is obtained. The normal algorithm is then resumed from the perturbed point.
If a saddle point is suspected, a local search is carried out with a view to moving away from the saddle point. In addition, nag_opt_bounds_no_deriv (e04jbc) gives you the option of specifying that a local search should be performed when a point is found which is thought to be a constrained minimum.
If you specify that the problem is unconstrained, nag_opt_bounds_no_deriv (e04jbc) sets the ${l}_{j}$ to $-{10}^{10}$ and the ${u}_{j}$ to ${10}^{10}$. Thus, provided that the problem has been sensibly scaled, no bounds will be encountered during the minimization process and nag_opt_bounds_no_deriv (e04jbc) will act as an unconstrained minimization algorithm. When the problem is unconstrained, the function values used for estimating the first derivatives will always be required in sets of $n$. nag_opt_bounds_no_deriv (e04jbc) enables you to take advantage (via the argument bound) of the fact that such sets can often be evaluated in less computer time than $n$ separate function evaluations would take in general.

## 4  References

Gill P E and Murray W (1972) Quasi-Newton methods for unconstrained optimization J. Inst. Math. Appl. 9 91–108
Gill P E and Murray W (1973) Safeguarded steplength algorithms for optimization using descent methods NPL Report NAC 37 National Physical Laboratory
Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory

## 5  Arguments

1: n – Integer Input
On entry: the number $n$ of independent variables.
Constraint: ${\mathbf{n}}\ge 1$.
2: objfun – function, supplied by the user – External Function
objfun must evaluate the function $F\left(x\right)$ at any $x$. If nag_opt_bounds_no_deriv (e04jbc) is called with ${\mathbf{bound}}=\mathrm{Nag_NoBounds_One_Call}$, objfun must also be able to provide the set of $n$ function values used for estimating first derivatives. (However, if you do not wish to calculate $F$ at a particular $x$, there is the option of setting an argument to cause nag_opt_bounds_no_deriv (e04jbc) to terminate immediately.)
The specification of objfun is:
 void objfun (Integer n, const double x[], double *objf, double g[], Nag_Comm *comm)
1: n – Integer Input
On entry: the number $n$ of variables.
2: x[n] – const double Input
On entry: the point $x$ at which the value of $F$ is required.
3: objf – double * Output
On exit: if $\mathbf{comm}\mathbf{\to }\mathbf{flag}=0$ on entry, then objfun must set objf to the value of the objective function $F$ at the current point given in x. If it is not possible to evaluate $F$, then objfun should assign a negative value to $\mathbf{comm}\mathbf{\to }\mathbf{flag}$; nag_opt_bounds_no_deriv (e04jbc) will then terminate.
4: g[n] – double Input/Output
On entry: if $\mathbf{comm}\mathbf{\to }\mathbf{flag}=3$ then g contains a set of differencing intervals.
On exit: if $\mathbf{comm}\mathbf{\to }\mathbf{flag}=3$ on entry, then objfun must reset ${\mathbf{g}}\left[\mathit{j}-1\right]$ to $F\left({x}_{c}+{\mathbf{g}}\left[\mathit{j}-1\right]×{e}_{\mathit{j}}\right)$, for $\mathit{j}=1,2,\dots ,n$, where ${x}_{c}$ is the point given in x and ${e}_{j}$ is the $j$th coordinate direction. If it is not possible to evaluate the elements of g then objfun should assign a negative value to $\mathbf{comm}\mathbf{\to }\mathbf{flag}$; nag_opt_bounds_no_deriv (e04jbc) will then terminate.
Thus, since the function values are required at $n$ points which each differ from ${x}_{c}$ only in one coordinate, it may be possible to calculate some terms once but use them in the calculation of more than one function value. (If $\mathbf{comm}\mathbf{\to }\mathbf{flag}=0$ on entry, objfun must not change the elements of g.)
5: comm – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to objfun.
flag – Integer Input/Output
On entry: $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ will be set to 0 or 3. The value 0 indicates that a single function value is required. The value 3 (which will only occur if nag_opt_bounds_no_deriv (e04jbc) is called with ${\mathbf{bound}}=\mathrm{Nag_NoBounds_One_Call}$) indicates that a set of $n$ function values is required.
On exit: if objfun resets $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to some negative number then nag_opt_bounds_no_deriv (e04jbc) will terminate immediately with the error indicator NE_USER_STOP. If fail is supplied to nag_opt_bounds_no_deriv (e04jbc), ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be set to your setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
first – Nag_Boolean Input
On entry: will be set to Nag_TRUE on the first call to objfun and Nag_FALSE for all subsequent calls.
nf – Integer Input
On entry: the number of calculations of the objective function; this value will be equal to the number of calls made to objfun, including the current one, unless the argument ${\mathbf{bound}}=\mathrm{Nag_NoBounds_One_Call}$.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void * with a C compiler that defines void *, and char * otherwise.
Before calling nag_opt_bounds_no_deriv (e04jbc) these pointers may be allocated memory and initialized with various quantities for use by objfun when called from nag_opt_bounds_no_deriv (e04jbc).
Note: objfun should be tested separately before being used in conjunction with nag_opt_bounds_no_deriv (e04jbc). The array x must not be changed by objfun.
3: bound – Nag_BoundType Input
On entry: indicates whether the problem is unconstrained or bounded. If the problem is unconstrained, the value of bound can be used to indicate that you wish objfun to be called with $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ set to 3 when a set of $n$ function values is required for making difference estimates of derivatives. If there are bounds on the variables, bound can be used to indicate whether the facility for dealing with bounds of special forms is to be used. bound should be set to one of the following values:
${\mathbf{bound}}=\mathrm{Nag_Bounds}$
If the variables are bounded and you will be supplying all the ${l}_{j}$ and ${u}_{j}$ individually.
${\mathbf{bound}}=\mathrm{Nag_NoBounds}$
If the problem is unconstrained and you wish objfun to be called $n$ times with $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ set to 0 when a set of function values is required for making difference estimates.
${\mathbf{bound}}=\mathrm{Nag_BoundsZero}$
If the variables are bounded, but all the bounds are of the form $0\le {x}_{j}$.
${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$
If all the variables are bounded, and ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$.
${\mathbf{bound}}=\mathrm{Nag_NoBounds_One_Call}$
If the problem is unconstrained and you wish a single call to be made to objfun with $\mathbf{comm}\mathbf{\to }\mathbf{flag}=3$ when a set of function values are required for making difference estimates.
Constraint: ${\mathbf{bound}}=\mathrm{Nag_Bounds}$, $\mathrm{Nag_NoBounds}$, $\mathrm{Nag_BoundsZero}$, $\mathrm{Nag_BoundsEqual}$ or $\mathrm{Nag_NoBounds_One_Call}$.
4: bl[n] – double Input/Output
On entry: the lower bounds ${l}_{j}$.
If ${\mathbf{bound}}=\mathrm{Nag_Bounds}$, you must set ${\mathbf{bl}}\left[\mathit{j}-1\right]$ to ${l}_{\mathit{j}}$ , for $\mathit{j}=1,2,\dots ,n$. (If a lower bound is not required for any ${x}_{j}$, the corresponding ${\mathbf{bl}}\left[j-1\right]$ should be set to a large negative number, e.g., $-{10}^{10}$.)
If ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$, you must set ${\mathbf{bl}}\left[0\right]$ to ${l}_{1}$; nag_opt_bounds_no_deriv (e04jbc) will then set the remaining elements of bl equal to ${\mathbf{bl}}\left[0\right]$.
If ${\mathbf{bound}}=\mathrm{Nag_NoBounds}$, $\mathrm{Nag_BoundsZero}$ or $\mathrm{Nag_NoBounds_One_Call}$, bl will be initialized by nag_opt_bounds_no_deriv (e04jbc).
On exit: the lower bounds actually used by nag_opt_bounds_no_deriv (e04jbc), e.g., if ${\mathbf{bound}}=\mathrm{Nag_BoundsZero}$, ${\mathbf{bl}}\left[0\right]={\mathbf{bl}}\left[1\right]=\cdots ={\mathbf{bl}}\left[n-1\right]=0.0$.
5: bu[n] – double Input/Output
On entry: the upper bounds ${u}_{j}$.
If ${\mathbf{bound}}=\mathrm{Nag_Bounds}$, you must set ${\mathbf{bu}}\left[\mathit{j}-1\right]$ to ${u}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$. (If an upper bound is not required for any ${x}_{j}$, the corresponding ${\mathbf{bu}}\left[j-1\right]$ should be set to a large positive number, e.g., ${10}^{10}$.)
If ${\mathbf{bound}}=\mathrm{Nag_BoundsEqual}$, you must set ${\mathbf{bu}}\left[0\right]$ to ${u}_{1}$; nag_opt_bounds_no_deriv (e04jbc) will then set the remaining elements of bu equal to ${\mathbf{bu}}\left[0\right]$.
If ${\mathbf{bound}}=\mathrm{Nag_NoBounds}$, $\mathrm{Nag_BoundsZero}$ or $\mathrm{Nag_NoBounds_One_Call}$, bu will be initialized by nag_opt_bounds_no_deriv (e04jbc).
On exit: the upper bounds actually used by nag_opt_bounds_no_deriv (e04jbc), e.g., if ${\mathbf{bound}}=\mathrm{Nag_BoundsZero}$, ${\mathbf{bu}}\left[0\right]={\mathbf{bu}}\left[1\right]=\cdots ={\mathbf{bu}}\left[n-1\right]={10}^{10}$.
6: x[n] – double Input/Output
On entry: ${\mathbf{x}}\left[\mathit{j}-1\right]$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.
On exit: the final point ${x}^{*}$. Thus, if ${\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE_NOERROR}$ on exit, ${\mathbf{x}}\left[j-1\right]$ is the $j$th component of the estimated position of the minimum.
7: objf – double * Input/Output
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ or $\mathrm{Nag_Init_H_S}$, you need not initialize objf.
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$, objf must be set on entry to the value of $F\left(x\right)$ at the initial point supplied in x.
On exit: the function value at the final point given in x.
8: g[n] – double Input/Output
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$, g must be set on entry to an approximation to the first derivative vector at the initial $x$. This could be calculated by central differences.
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ or $\mathrm{Nag_Init_H_S}$, g need not be set.
On exit: a finite difference approximation to the first derivative vector. Note that the elements of g corresponding to free variables are updated every iteration, but the elements corresponding to fixed variables are only updated when it is necessary to test the Lagrange-multiplier estimates (see Section 3). So, in the printout from nag_opt_bounds_no_deriv (e04jbc) (see Section 5 and Section 11.3) and on exit from nag_opt_bounds_no_deriv (e04jbc), the elements of g corresponding to fixed variables may be out of date. The elements of g corresponding to free variables should normally be close to zero on exit from nag_opt_bounds_no_deriv (e04jbc).
9: options – Nag_E04_Opt * Input/Output
On entry/exit: a pointer to a structure of type Nag_E04_Opt whose members are optional arguments for nag_opt_bounds_no_deriv (e04jbc). These structure members offer the means of adjusting some of the argument values of the algorithm and on output will supply further details of the results. A description of the members of options is given below in Section 11. Some of the results returned in options can be used by nag_opt_bounds_no_deriv (e04jbc) to perform a ‘warm start’ if it is re-entered (see the member ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}$ in Section 11.2).
If any of these optional arguments are required then the structure options should be declared and initialized by a call to nag_opt_init (e04xxc) and supplied as an argument to nag_opt_bounds_no_deriv (e04jbc). However, if the optional arguments are not required the NAG defined null pointer, E04_DEFAULT, can be used in the function call.
10: comm – Nag_Comm * Input/Output
Note: comm is a NAG defined type (see Section 3.2.1.1 in the Essential Introduction).
On entry/exit: structure containing pointers for communication with user-supplied functions; see the above description of objfun for details. If you do not need to make use of this communication feature the null pointer NAGCOMM_NULL may be used in the call to nag_opt_bounds_no_deriv (e04jbc); comm will then be declared internally for use in calls to user-supplied functions.
11: fail – NagError * Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).

### 5.1  Description of Printed Output

Intermediate and final results are printed out by default. The level of printed output can be controlled with the structure member ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ (see Section 11.2). The default, ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter}$, provides a single line of output at each iteration and the final result. This section describes the default printout produced by nag_opt_bounds_no_deriv (e04jbc).
The following line of output is produced at each iteration. In all cases the values of the quantities printed are those in effect on completion of the given iteration.
- Itn – the iteration count, $k$.
- Nfun – the cumulative number of calls made to objfun.
- Objective – the value of the objective function, $F\left({x}^{\left(k\right)}\right)$.
- Norm g – the Euclidean norm of the projected gradient vector, $‖{g}_{z}\left({x}^{\left(k\right)}\right)‖$.
- Norm x – the Euclidean norm of ${x}^{\left(k\right)}$.
- Norm(x(k-1)-x(k)) – the Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
- Step – the step ${\alpha }^{\left(k\right)}$ taken along the computed search direction ${p}^{\left(k\right)}$.
- Cond H – the ratio of the largest to the smallest element of the diagonal factor $D$ of the projected Hessian matrix. This quantity is usually a good estimate of the condition number of the projected Hessian matrix. (If no variables are currently free, this value will be zero.)
The printout of the final result consists of:
- x – the final point, ${x}^{*}$.
- g – the final estimate of the projected gradient vector, ${g}_{z}\left({x}^{*}\right)$.
- Status – the final state of the variable with respect to its bound.

## 6  Error Indicators and Warnings

When one of NE_USER_STOP, NE_INT_ARG_LT, NE_BOUND, NE_OPT_NOT_INIT, NE_BAD_PARAM, NE_2_REAL_ARG_LT, NE_INVALID_INT_RANGE_1, NE_INVALID_REAL_RANGE_EF, NE_INVALID_REAL_RANGE_FF, NE_NO_MEM, NE_FD_INT, NE_HESD or NE_ALLOC_FAIL occurs, no values will have been assigned by nag_opt_bounds_no_deriv (e04jbc) to objf or to the elements of g, ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$, or ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$.
An exit with ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NW_TOO_MANY_ITER}}$, NW_COND_MIN or NW_LOCAL_SEARCH may also be caused by mistakes in objfun, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
NE_2_REAL_ARG_LT
On entry, ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\ge {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument bound had an illegal value.
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}$ had an illegal value.
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ had an illegal value.
NE_BOUND
The lower bound for variable $⟨\mathit{\text{value}}⟩$ (array element ${\mathbf{bl}}\left[⟨\mathit{\text{value}}⟩\right]$) is greater than the upper bound.
NE_CANCEL_ERR
The overall relative cancellation error in the gradient estimate, $g$, or the expected search direction, $p$, is larger than 0.1. You should attempt to select another starting point.
NE_CHOLESKY_OVERFLOW
An overflow would have occurred during the updating of the Cholesky factors if the calculations had been allowed to continue. Restart from the current point with ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$.
NE_FD_INT
Finite difference interval for variable $⟨\mathit{\text{value}}⟩$ (array element ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}\left[⟨\mathit{\text{value}}⟩\right]$) is negative or so small that ${\mathbf{x}}+$ interval $\text{}={\mathbf{x}}$.
NE_HESD
The supplied ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ contains one or more initial values which are negative or too small, or the ratio of its largest element to its smallest is too large.
NE_INT_ARG_LT
On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{n}}\ge 1$.
NE_INVALID_INT_RANGE_1
Value $⟨\mathit{\text{value}}⟩$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ is not valid. Correct range is ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
NE_INVALID_REAL_RANGE_EF
Value $⟨\mathit{\text{value}}⟩$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ is not valid. Correct range is $\epsilon \le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
NE_INVALID_REAL_RANGE_FF
Value $⟨\mathit{\text{value}}⟩$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ is not valid. Correct range is $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
NE_NO_MEM
Option ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=⟨\mathit{string}⟩$ but at least one of the pointers $⟨\mathit{string}⟩$ in the option structure has not been allocated memory.
NE_NOT_APPEND_FILE
Cannot open file $⟨\mathit{string}⟩$ for appending.
NE_NOT_CLOSE_FILE
Cannot close file $⟨\mathit{string}⟩$.
NE_OPT_NOT_INIT
Options structure not initialized.
NE_USER_STOP
User requested termination, user flag value $\text{}=⟨\mathit{\text{value}}⟩$. This exit occurs if you set $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to a negative value in objfun. If fail is supplied the value of ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be the same as your setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
NE_WRITE_ERROR
Error occurred when writing to file $⟨\mathit{string}⟩$.
NW_COND_MIN
The conditions for a minimum have not all been satisfied, but a lower point could not be found. Provided that, on exit, the estimated first derivatives of $F\left(x\right)$ with respect to the free variables are sufficiently small, and that the estimated condition number of the second derivative matrix is not too large, this error exit may simply mean that, although it has not been possible to satisfy the specified requirements, the algorithm has in fact found the minimum as far as the accuracy of the machine permits. This could be because ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ has been set so small that rounding error in objfun makes attainment of the convergence conditions impossible. If the estimated condition number of the approximate Hessian matrix at the final point is large, it could be that the final point is a minimum but that the smallest eigenvalue of the second derivative matrix is so close to zero that it is not possible to recognize the point as a minimum.
NW_LOCAL_SEARCH
The local search has failed to find a feasible point which gives a significant change of function value. If the problem is a genuinely unconstrained one, this type of exit indicates that the problem is extremely ill conditioned or that the function has no minimum. If the problem has bounds which may be close to the minimum, it may just indicate that steps in the subspace of free variables happened to meet a bound before they changed the function value.
NW_TOO_MANY_ITER
The maximum number of iterations, $⟨\mathit{\text{value}}⟩$, has been performed. If steady reductions in $F\left(x\right)$ were monitored up to the point where this exit occurred, then the exit probably occurred simply because ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that $F\left(x\right)$ has no minimum.

## 7  Accuracy

A successful exit $\left({\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE_NOERROR}\right)$ is made from nag_opt_bounds_no_deriv (e04jbc) when (B1, B2 and B3) or B4 hold, and the local search (if used) confirms a minimum, where
• $\mathrm{B}1\equiv {\alpha }^{\left(k\right)}×‖{p}^{\left(k\right)}‖<\left({\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}+\sqrt{\epsilon }\right)×\left(1.0+‖{x}^{\left(k\right)}‖\right)$
• $\mathrm{B}2\equiv \left|{F}^{\left(k\right)}-{F}^{\left(k-1\right)}\right|<\left({{\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}}^{2}+\epsilon \right)×\left(1.0+\left|{F}^{\left(k\right)}\right|\right)$
• $\mathrm{B}3\equiv ‖{g}_{z}^{\left(k\right)}‖<\left({\epsilon }^{1/3}+{\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}\right)×\left(1.0+\left|{F}^{\left(k\right)}\right|\right)$
• $\mathrm{B}4\equiv ‖{g}_{z}^{\left(k\right)}‖<0.01×\sqrt{\epsilon }\text{.}$
(Quantities with superscript $k$ are the values at the $k$th iteration of the quantities mentioned in Section 3; $\epsilon$ is the machine precision, $‖.‖$ denotes the Euclidean norm and ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ is described in Section 11.)
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE_NOERROR}$, then the vector in x on exit, ${x}_{\mathrm{sol}}$, is almost certainly an estimate of the position of the minimum, ${x}_{\mathrm{true}}$, to the accuracy specified by ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NW_COND_MIN}}$ or NW_LOCAL_SEARCH, ${x}_{\mathrm{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but the following checks should be made. Let the largest of the first ${n}_{z}$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ be ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[b\right]$, let the smallest be ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[s\right]$, and define $k={\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[b\right]/{\mathbf{options}}\mathbf{.}{\mathbf{hesd}}\left[s\right]$. The scalar $k$ is usually a good estimate of the condition number of the projected Hessian matrix at ${x}_{\mathrm{sol}}$. If
(a) the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or a fast linear rate,
(b) ${‖{g}_{z}\left({x}_{\mathrm{sol}}\right)‖}^{2}<10.0×\epsilon$, and
(c) $k<1.0/‖{g}_{z}\left({x}_{\mathrm{sol}}\right)‖$,
then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the position of a minimum. When (b) is true, then usually $F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$. The quantities needed for these checks are all available in the results printout from nag_opt_bounds_no_deriv (e04jbc); in particular the final value of Cond H gives $k$.
Further suggestions about confirmation of a computed solution are given in the e04 Chapter Introduction.

## 8  Parallelism and Performance

Not applicable.

## 9  Further Comments

### 9.1  Timing

The number of iterations required depends on the number of variables, the behaviour of $F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed in an iteration of nag_opt_bounds_no_deriv (e04jbc) is roughly proportional to ${n}_{z}^{2}$. In addition, each iteration makes at least ${n}_{z}+1$ function evaluations. So, unless $F\left(x\right)$ can be evaluated very quickly, the run time will be dominated by the time spent in objfun.

### 9.2  Scaling

Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $\left(-1,+1\right)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix at the solution is well conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_bounds_no_deriv (e04jbc) will take less computer time.

### 9.3  Unconstrained Minimization

If a problem is genuinely unconstrained and has been scaled sensibly, the following points apply:
(a) ${n}_{z}$ will always be $n$;
(b) if ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$ on entry, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[\mathit{j}-1\right]$ has simply to be set to $\mathit{j}$, for $\mathit{j}=1,2,\dots ,n$;
(c) ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ will be factors of the full approximate second derivative matrix with elements stored in the natural order;
(d) the elements of g should all be close to zero at the final point;
(e) the Status values given in the printout from nag_opt_bounds_no_deriv (e04jbc) and in ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ on exit are unlikely to be of interest (unless they are negative, which would indicate that the modulus of one of the ${x}_{j}$ has reached ${10}^{10}$ for some reason);
(f) Norm g simply gives the norm of the estimated first derivative vector.

## 10  Example

This example minimizes the function
$$F = \left(x_1 + 10 x_2\right)^2 + 5 \left(x_3 - x_4\right)^2 + \left(x_2 - 2 x_3\right)^4 + 10 \left(x_1 - x_4\right)^4$$
subject to the bounds
$$-1 \le x_1 \le 3, \qquad -2 \le x_2 \le 0, \qquad -1 \le x_4 \le 3$$
starting from the initial guess ${\left(3,-1,0,1\right)}^{\mathrm{T}}$.
The example program also shows the use of certain optional arguments. It shows option values being assigned directly within the program text and by reading values from a data file. The options structure is declared and initialized by nag_opt_init (e04xxc). Values are then assigned directly to ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ and three further options are read from the data file by use of nag_opt_read (e04xyc). The memory freeing function nag_opt_free (e04xzc) is used to free the memory assigned to the pointers in the option structure. You must not use the standard C function free() for this purpose.

### 10.1  Program Text

Program Text (e04jbce.c)

### 10.2  Program Data

Program Options (e04jbce.opt)

### 10.3  Program Results

Program Results (e04jbce.r)

## 11  Optional Arguments

A number of optional input and output arguments to nag_opt_bounds_no_deriv (e04jbc) are available through the structure argument options, type Nag_E04_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member; those arguments not selected will be assigned default values. If no use is to be made of any of the optional arguments you should use the NAG defined null pointer, E04_DEFAULT, in place of options when calling nag_opt_bounds_no_deriv (e04jbc); the default settings will then be used for all arguments.
Before assigning values to options directly the structure must be initialized by a call to the function nag_opt_init (e04xxc). Values may then be assigned to the structure members in the normal C manner.
Option settings may also be read from a text file using the function nag_opt_read (e04xyc) in which case initialization of the options structure will be performed automatically if not already done. Any subsequent direct assignment to the options structure must not be preceded by initialization.
If assignment of functions and memory to pointers in the options structure is required, then this must be done directly in the calling program; they cannot be assigned using nag_opt_read (e04xyc).

### 11.1  Optional Argument Checklist and Default Values

For easy reference, the following list shows the members of options which are valid for nag_opt_bounds_no_deriv (e04jbc) together with their default values where relevant. The number $\epsilon$ is a generic notation for machine precision (see nag_machine_precision (X02AJC)).
| Member | Default |
| --- | --- |
| Boolean list | Nag_TRUE |
| Nag_PrintType print_level | Nag_Soln_Iter |
| char outfile[80] | stdout |
| void (*print_fun)() | NULL |
| Nag_InitType init_state | Nag_Init_None |
| Integer max_iter | $50{\mathbf{n}}$ |
| double optim_tol | $10\sqrt{\epsilon }$ |
| double linesearch_tol | 0.5 (0.0 if ${\mathbf{n}}=1$) |
| double step_max | 100000.0 |
| double f_est | – |
| Boolean local_search | Nag_TRUE |
| double *delta | size ${\mathbf{n}}$ |
| Integer *state | size ${\mathbf{n}}$ |
| double *hesl | size $\mathrm{max}\left({\mathbf{n}}\left({\mathbf{n}}-1\right)/2,1\right)$ |
| double *hesd | size ${\mathbf{n}}$ |
| Integer iter | – |
| Integer nf | – |

### 11.2  Description of the Optional Arguments

 list – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ the argument settings in the call to nag_opt_bounds_no_deriv (e04jbc) will be printed.
 print_level – Nag_PrintType Default $\text{}=\mathrm{Nag_Soln_Iter}$
On entry: the level of results printout produced by nag_opt_bounds_no_deriv (e04jbc). The following values are available:
 – $\mathrm{Nag_NoPrint}$: No output.
 – $\mathrm{Nag_Soln}$: The final solution.
 – $\mathrm{Nag_Iter}$: One line of output for each iteration.
 – $\mathrm{Nag_Soln_Iter}$: The final solution and one line of output for each iteration.
 – $\mathrm{Nag_Soln_Iter_Full}$: The final solution and detailed printout at each iteration.
Details of each level of results printout are described in Section 11.3.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$, $\mathrm{Nag_Soln}$, $\mathrm{Nag_Iter}$, $\mathrm{Nag_Soln_Iter}$ or $\mathrm{Nag_Soln_Iter_Full}$.
 outfile – const char[80] Default $\text{}=\mathtt{stdout}$
On entry: the name of the file to which results should be printed. If ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}\left[0\right]=\text{'\0'}$ then the stdout stream is used.
 print_fun – pointer to function Default $\text{}=\text{}$ NULL
On entry: printing function defined by you; the prototype of ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ is
`void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);`
See Section 11.3.1 below for further details.
 init_state – Nag_InitType Default $\text{}=\mathrm{Nag_Init_None}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}$ specifies which of the arguments objf, g, ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$, ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ are actually being initialized. Such information will generally reduce the time taken by nag_opt_bounds_no_deriv (e04jbc).
${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$
No values are assumed to have been set in any of objf, g, ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$, ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ or ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$. (nag_opt_bounds_no_deriv (e04jbc) will use the unit matrix as the initial estimate of the Hessian matrix.)
${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$
The arguments objf and g must contain the value of $F\left(x\right)$ and estimates of its first derivatives at the starting point. All $n$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ must have been set to indicate which variables are on their bounds and which are free. The pointer ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$ must give the $n$ finite difference intervals. ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ must contain the Cholesky factors of a positive definite approximation to the ${n}_{z}$ by ${n}_{z}$ Hessian matrix for the subspace of free variables. (This option is useful for restarting the minimization process if ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ is reached.)
${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_H_S}$
No values are assumed to have been set in objf or g, but ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$, ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$ must have been set as for ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$. (This option is useful for starting off a minimization run using second derivative information from a previous, similar, run.)
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$, $\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$.
 max_iter – Integer Default $\text{}=50{\mathbf{n}}$
On entry: the limit on the number of iterations allowed before termination.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
 optim_tol – double Default $\text{}=10\sqrt{\epsilon }$
On entry: the accuracy in $x$ to which the solution is required. If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathrm{sol}}$, the estimated position prior to a normal exit, is such that
 $‖{x}_{\mathrm{sol}}-{x}_{\mathrm{true}}‖<{\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}\times \left(1.0+‖{x}_{\mathrm{true}}‖\right),$
where $‖y‖={\left({\sum }_{j=1}^{n}{y}_{j}^{2}\right)}^{1/2}$. For example, if the elements of ${x}_{\mathrm{sol}}$ are not much larger than 1.0 in modulus and if ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ is set to ${10}^{-5}$, then ${x}_{\mathrm{sol}}$ is usually accurate to about 5 decimal places. (For further details see Section 9.) If the problem is scaled roughly as described in Section 9 and $\epsilon$ is the machine precision, then $\sqrt{\epsilon }$ is probably the smallest reasonable choice for ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$. (This is because, normally, to machine accuracy, $F\left(x+\sqrt{\epsilon }{e}_{j}\right)=F\left(x\right)$ where ${e}_{j}$ is any column of the identity matrix.)
Constraint: $\epsilon \le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
 linesearch_tol – double Default $\text{}=0.5$. (If ${\mathbf{n}}=1$, default $\text{}=0.0$.)
On entry: every iteration of nag_opt_bounds_no_deriv (e04jbc) involves a linear minimization (i.e., minimization of $F\left(x+\alpha p\right)$ with respect to $\alpha$). ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ specifies how accurately these linear minimizations are to be performed. The minimum with respect to $\alpha$ will be located more accurately for small values of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ (say 0.01) than for large values (say 0.9).
Although accurate linear minimizations will generally reduce the number of iterations performed by nag_opt_bounds_no_deriv (e04jbc), they will increase the number of function evaluations required for each iteration. On balance, it is usually more efficient to perform a low accuracy linear minimization.
A smaller value such as 0.01 may be worthwhile:
 (a) if $F\left(x\right)$ can be evaluated unusually quickly (since it may be worth using extra function evaluations to reduce the number of iterations and associated matrix calculations);
 (b) if $F\left(x\right)$ is a penalty or barrier function arising from a constrained minimization problem (since such problems are very difficult to solve).
If ${\mathbf{n}}=1$, the default value of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ is 0.0. (If the problem is effectively one-dimensional, ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ should be set to 0.0 even though ${\mathbf{n}}>1$; i.e., if the lower and upper bounds are equal for all except one of the variables.)
Constraint: $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
 step_max – double Default $\text{}=100000.0$
On entry: an estimate of the Euclidean distance between the solution and the starting point supplied. (For maximum efficiency a slight overestimate is preferable.) nag_opt_bounds_no_deriv (e04jbc) will ensure that, for each iteration,
 ${\left(\sum _{j=1}^{n}{\left({x}_{j}^{\left(k\right)}-{x}_{j}^{\left(k-1\right)}\right)}^{2}\right)}^{1/2}\le {\mathbf{options}}\mathbf{.}{\mathbf{step_max}},$
where $k$ is the iteration number. Thus, if the problem has more than one solution, nag_opt_bounds_no_deriv (e04jbc) is most likely to find the one nearest the starting point. On difficult problems, a realistic choice can prevent the sequence of ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can also help to avoid possible overflow in the evaluation of $F\left(x\right)$. However an underestimate of ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}$ can lead to inefficiency.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\ge {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
 f_est – double
On entry: an estimate of the function value at the minimum. This estimate is used only for calculating suitable initial step lengths for the linear minimizations, so the choice is not too critical. However, it is better for ${\mathbf{options}}\mathbf{.}{\mathbf{f_est}}$ to be set to an underestimate rather than to an overestimate. If no value is supplied then an initial step length of $1.0$ is used, though this may be reduced to ensure that the bounds are not overstepped.
 local_search – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}$ must specify whether or not you wish a ‘local search’ to be performed when a point is found which is thought to be a constrained minimum.
If ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}=\mathrm{Nag_TRUE}$ and either the quasi-Newton direction of search fails to produce a lower function value or the convergence criteria are satisfied, then a local search will be performed. This may move the search away from a saddle point or confirm that the final point is a minimum.
If ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}=\mathrm{Nag_FALSE}$ there will be no local search when a point is found which is thought to be a minimum.
The amount of work involved in a local search is comparable to twice that required in a normal iteration to minimize $F\left(x+\alpha p\right)$ with respect to $\alpha$. For most problems this will be small (relative to the total time required for the minimization). ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}$ could be set Nag_FALSE if:
 – it is known from the physical properties of a problem that a stationary point will be the required minimum;
 – a point which is not a minimum could easily be recognized, for example if the value of $F\left(x\right)$ at the minimum is known.
 delta – double * Default memory $\text{}={\mathbf{n}}$
On entry: suitable step lengths for making difference approximations to the partial derivatives of $F\left(x\right)$.
If ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$ is not allocated memory and ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ then nag_opt_bounds_no_deriv (e04jbc) will allocate memory to ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$ and assign a suitable set of difference intervals.
If ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$ is allocated memory, i.e., ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$ is not NULL, and ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ then difference intervals are assumed to be supplied by ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$.
When ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}\ne \mathrm{Nag_Init_None}$ then ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$ must hold the finite difference intervals; these may be the values output from a previous call to nag_opt_bounds_no_deriv (e04jbc).
If you wish to supply difference intervals then the following advice can be given. When the problem is scaled roughly as described in Section 9 and $\epsilon$ is the machine precision, values in the range $\sqrt{\epsilon }$ to ${\epsilon }^{2/3}$ may be suitable. Otherwise, you must choose suitable settings, bearing in mind that, when forward differences are used, the approximation is
 $\frac{\partial F}{\partial {x}_{j}}\approx \frac{F\left(x+{\mathbf{options}}\mathbf{.}{\mathbf{delta}}\left[j\right]\times {e}_{j}\right)-F\left(x\right)}{{\mathbf{options}}\mathbf{.}{\mathbf{delta}}\left[j\right]},$
where ${e}_{\mathit{j}}$ is the $\mathit{j}$th coordinate direction, for $\mathit{j}=1,2,\dots ,n$.
On exit: the n finite difference intervals used by nag_opt_bounds_no_deriv (e04jbc). If ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$ is NULL on entry and ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ then memory will have been automatically allocated to ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}$ and suitable values assigned.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{delta}}\left[j\right]\ge 0.0\text{, ​}{\mathbf{x}}\left[j\right]+{\mathbf{options}}\mathbf{.}{\mathbf{delta}}\left[j\right]\ne {\mathbf{x}}\left[j\right]$.
 state – Integer * Default memory $\text{}={\mathbf{n}}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ need not be set if the default option of ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ is used as n values of memory will be automatically allocated by nag_opt_bounds_no_deriv (e04jbc).
If the option ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$ has been chosen, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ must point to a minimum of n elements of memory. This memory will already be available if the calling program has used the options structure in a previous call to nag_opt_bounds_no_deriv (e04jbc) with ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ and the same value of n. If a previous call has not been made, you must allocate sufficient memory.
When ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$ then ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ must specify information about which variables are currently on their bounds and which are free. If ${x}_{j}$ is:
 (a) fixed on its upper bound, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[j-1\right]$ is $-1$;
 (b) fixed on its lower bound, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[j-1\right]$ is $-2$;
 (c) effectively a constant (i.e., ${l}_{j}={u}_{j}$), ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[j-1\right]$ is $-3$;
 (d) free, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}\left[j-1\right]$ gives its position in the sequence of free variables.
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ will be initialized by nag_opt_bounds_no_deriv (e04jbc).
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$, ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ must be initialized before nag_opt_bounds_no_deriv (e04jbc) is called.
On exit: ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ gives information as above about the final point given in x.
 hesl – double * Default memory $\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{n}}\left({\mathbf{n}}-1\right)/2,1\right)$
 hesd – double * Default memory $\text{}={\mathbf{n}}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ need not be set if the default of ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ is used as sufficient memory will be automatically allocated by nag_opt_bounds_no_deriv (e04jbc).
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_H_S}$ has been set then ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ must point to a minimum of $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{n}}\left({\mathbf{n}}-1\right)/2,1\right)$ elements of memory.
${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ must point to at least n elements of memory if ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$ has been chosen.
The appropriate amount of memory will already be available for ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ if the calling program has used the options structure in a previous call to nag_opt_bounds_no_deriv (e04jbc) with ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ and the same value of n. If a previous call has not been made, you must allocate sufficient memory.
${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ are used to store the factors $L$ and $D$ of the current approximation to the matrix of second derivatives with respect to the free variables (see Section 3). (The elements of the matrix are assumed to be ordered according to the permutation specified by the positive elements of ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$, see above.) ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ holds the lower triangle of $L$, omitting the unit diagonal, stored by rows. ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ stores the diagonal elements of $D$. Thus if ${n}_{z}$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ are positive, the strict lower triangle of $L$ will be held in the first ${n}_{z}\left({n}_{z}-1\right)/2$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and the diagonal elements of $D$ in the first ${n}_{z}$ elements of ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$.
If ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_None}$ (the default), ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ will be initialized within nag_opt_bounds_no_deriv (e04jbc) to the factors of the unit matrix.
If you set ${\mathbf{options}}\mathbf{.}{\mathbf{init_state}}=\mathrm{Nag_Init_All}$ or $\mathrm{Nag_Init_H_S}$, ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ must contain on entry the Cholesky factors of a positive definite approximation to the ${n}_{z}$ by ${n}_{z}$ matrix of second derivatives for the subspace of free variables as specified by your setting of ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$.
On exit: ${\mathbf{options}}\mathbf{.}{\mathbf{hesl}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ hold the factors $L$ and $D$ corresponding to the final point given in x. The elements of ${\mathbf{options}}\mathbf{.}{\mathbf{hesd}}$ are useful for deciding whether to accept the result produced by nag_opt_bounds_no_deriv (e04jbc) (see Section 9).
 iter – Integer
On exit: the number of iterations which have been performed in nag_opt_bounds_no_deriv (e04jbc).
 nf – Integer
On exit: the number of times the objective function has been evaluated.

### 11.3  Description of Printed Output

The level of printed output can be controlled with the structure members ${\mathbf{options}}\mathbf{.}{\mathbf{list}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ (see Section 11.2). If ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ then the argument values to nag_opt_bounds_no_deriv (e04jbc) are listed, whereas the printout of results is governed by the value of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$. The default of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter}$ provides a single line of output at each iteration and the final result. This section describes all of the possible levels of printout available from nag_opt_bounds_no_deriv (e04jbc).
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Iter}$ or $\mathrm{Nag_Soln_Iter}$ a single line of output is produced on completion of each iteration, this gives the following values:
 Itn – the iteration count, $k$.
 Nfun – the cumulative number of objective function evaluations.
 Objective – the value of the objective function, $F\left({x}^{\left(k\right)}\right)$.
 Norm g – the Euclidean norm of the projected gradient vector, $‖{g}_{z}\left({x}^{\left(k\right)}\right)‖$.
 Norm x – the Euclidean norm of ${x}^{\left(k\right)}$.
 Norm(x(k-1)-x(k)) – the Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
 Step – the step ${\alpha }^{\left(k\right)}$ taken along the computed search direction ${p}^{\left(k\right)}$.
 Cond H – the ratio of the largest to the smallest element of the diagonal factor $D$ of the projected Hessian matrix. This quantity is usually a good estimate of the condition number of the projected Hessian matrix. (If no variables are currently free, this value will be zero.)
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter_Full}$ more detailed results are given at each iteration. Additional values output are:
 x – the current point ${x}^{\left(k\right)}$.
 g – the current estimate of the projected gradient vector, ${g}_{z}\left({x}^{\left(k\right)}\right)$.
 Status – the current state of each variable with respect to its bound(s).
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln}$, $\mathrm{Nag_Soln_Iter}$ or $\mathrm{Nag_Soln_Iter_Full}$ the final result is printed out. This consists of:
 x – the final point, ${x}^{*}$.
 g – the final estimate of the projected gradient vector, ${g}_{z}\left({x}^{*}\right)$.
 Status – the final state of each variable with respect to its bound(s).
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$ then printout will be suppressed; you can print the final solution when nag_opt_bounds_no_deriv (e04jbc) returns to the calling program.

#### 11.3.1  Output of results via a user-defined printing function

You may also specify your own print function for output of iteration results and the final solution by use of the ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ function pointer, which has prototype
`void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);`
The rest of this section can be skipped if the default printing facilities provide the required functionality.
When a user-defined function is assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ this will be called in preference to the internal print function of nag_opt_bounds_no_deriv (e04jbc). Calls to the user-defined function are again controlled by means of the ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ member. Information is provided through st and comm, the two structure arguments to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$.
The results contained in the members of st are those on completion of the last iteration or those after a local search. (An iteration may be followed by a local search (see ${\mathbf{options}}\mathbf{.}{\mathbf{local_search}}$, Section 11.2) in which case ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ is called with the results of the last iteration ($\mathbf{st}\mathbf{\to }\mathbf{local_search}=\mathrm{Nag_FALSE}$) and then again when the local search has been completed ($\mathbf{st}\mathbf{\to }\mathbf{local_search}=\mathrm{Nag_TRUE}$).)
If $\mathbf{comm}\mathbf{\to }\mathbf{it_prt}=\mathrm{Nag_TRUE}$ then the results on completion of an iteration of nag_opt_bounds_no_deriv (e04jbc) are contained in the members of st. If $\mathbf{comm}\mathbf{\to }\mathbf{sol_prt}=\mathrm{Nag_TRUE}$ then the final results from nag_opt_bounds_no_deriv (e04jbc), including details of the final iteration, are contained in the members of st. In both cases, the same members of st are set, as follows:
 iter – Integer
The current iteration count, $k$, if $\mathbf{comm}\mathbf{\to }\mathbf{it_prt}=\mathrm{Nag_TRUE}$; the final iteration count, $k$, if $\mathbf{comm}\mathbf{\to }\mathbf{sol_prt}=\mathrm{Nag_TRUE}$.
 n – Integer
The number of variables.
 x – double *
The coordinates of the point ${x}^{\left(k\right)}$.
 f – double *
The value of the current objective function.
 g – double *
The estimated value of $\frac{\partial F}{\partial {x}_{\mathit{j}}}$ at ${x}^{\left(k\right)}$, for $\mathit{j}=1,2,\dots ,n$.
 gpj_norm – double
The Euclidean norm of the current estimate of the projected gradient ${g}_{z}$.
 step – double
The step ${\alpha }^{\left(k\right)}$ taken along the search direction ${p}^{\left(k\right)}$.
 cond – double
The estimate of the condition number of the Hessian matrix.
 xk_norm – double
The Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
 state – Integer *
The status of the variables ${x}_{j}$, $j=1,2,\dots ,n$, with respect to their bounds. See the description of ${\mathbf{options}}\mathbf{.}{\mathbf{state}}$ in Section 11.2 for the possible status values.
 local_search – Nag_Boolean
Nag_TRUE if a local search has been performed.
 nf – Integer
The cumulative number of objective function evaluations.
The relevant members of the structure comm are:
 it_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the results of the current iteration.
 sol_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the final result.
 user – double *
 iuser – Integer *
 p – Pointer
Pointers for communication of user information. If used they must be allocated memory either before entry to nag_opt_bounds_no_deriv (e04jbc) or during a call to objfun or ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$. The type Pointer will be void * with a C compiler that defines void * and char * otherwise.