Note: this function uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm, and to Section 12 for a detailed description of the specification of the optional parameters.
e05kbc is designed to find the global minimum or maximum of an arbitrary function, subject to simple bound constraints, using a multi-level coordinate search method. Derivatives are not required, but convergence is only guaranteed if the objective function is continuous in a neighbourhood of a global optimum. It is not intended for large problems. The solver is applicable to problems of the form
$\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}\phantom{\rule{0.5em}{0ex}}f\left(x\right)\phantom{\rule{1em}{0ex}}\text{subject to}\phantom{\rule{1em}{0ex}}{l}_{x}\le x\le {u}_{x}\text{.}$
Here $f$ is a smooth nonlinear function and ${l}_{x}$ and ${u}_{x}$ are $n$-dimensional vectors defining bounds on the variables.
e05kbc serves as a solver for compatible problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for functions in the NAG optimization modelling suite. To define a compatible problem handle, you must call e04rac to initialize it followed, optionally, by e04rgc, e04rec or e04rfc to define the objective function and e04rhc to define bounds on the variables. If e04rhc is not called, all the variables will be considered free by the solver. It should be noted that e05kbc always assumes that the gradient of the objective is dense, therefore, defining a sparse structure for the residuals in the call to e04rgc will have no effect. Additionally, the multi-level coordinate search method used by this solver relies on dividing the feasible space in ‘boxes’ (see Section 11 for a more thorough explanation) so it is advisable to define reasonable bounds for the variables using e04rhc. See Section 4.1 in the E04 Chapter Introduction for more details about the NAG optimization modelling suite.
The algorithm behaviour and solver strategy can be modified by various optional parameters (see Section 12) which can be set by e04zmc and e04zpc at any time between the initialization of the handle by e04rac and a call to the solver. The names of the optional parameters specific to this solver start with the prefix MCS (Multi-level Coordinate Search). The default values for these optional parameters are chosen to work well in the general case, but it is recommended that you tune them to your particular problem. In particular, if the objective function is known to be numerically difficult, it could be desirable to define a customized initialization list for the algorithm with the optional parameter ${\mathbf{MCS\; Initialization\; Method}}$. For more details on how to create a custom initialization list, please refer to Section 9.2.
Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various initialization lists and/or optional parameters.
The method used by e05kbc is based on MCS, the Multi-level Coordinate Search method described in Huyer and Neumaier (1999), and the algorithm it uses is described in detail in Section 11.
4 References
Huyer W and Neumaier A (1999) Global optimization by multi-level coordinate search Journal of Global Optimization 14 331–355
5 Arguments
1: $\mathbf{handle}$ – void * Input
On entry: the handle to the problem. It needs to be initialized (e.g., by e04rac) and to hold a problem formulation compatible with e05kbc. It must not be changed between calls to the NAG optimization modelling suite.
2: $\mathbf{objfun}$ – function, supplied by the user External Function
objfun must calculate the value of the nonlinear objective function $f\left(x\right)$ at a specified point $x$. If there is no nonlinear objective (e.g., e04rec or e04rfc was called to define a linear or quadratic objective function), objfun will never be called by e05kbc and may be NULLFN.
1: $\mathbf{nvar}$ – Integer Input
On entry: $n$, the number of variables in the problem.
2: $\mathbf{x}\left[{\mathbf{nvar}}\right]$ – const double Input
On entry: the vector $x$ of variable values at which the objective function is to be evaluated.
3: $\mathbf{fx}$ – double * Output
On exit: the value of the objective function at $x$.
4: $\mathbf{inform}$ – Integer * Input/Output
On entry: a non-negative value.
On exit: may be used to indicate that the requested objective value could not be computed. Specifically, it can be set to a negative value:
${\mathbf{inform}}<0$
The solver will cleanly exit with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_USER_STOP and return the best available point as well as the solve statistics.
5: $\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to objfun.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e05kbc you may allocate memory and initialize these pointers with various quantities for use by objfun when called from e05kbc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: objfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e05kbc. If your code inadvertently does return any NaNs or infinities, e05kbc is likely to produce unexpected results.
3: $\mathbf{monit}$ – function, supplied by the user External Function
monit is provided to enable you to monitor the progress of the optimization and, optionally, to terminate the solver early if necessary. It is invoked at the end of every $i$th step, where $i$ is given by the optional parameter ${\mathbf{MCS\; Monitor\; Frequency}}$ (the default is $0$, in which case monit is not called). A step is complete when the procedure in which a sub-box is considered for splitting finishes correctly.
On entry: the vector $x$ of decision variables at the current iteration.
3: $\mathbf{inform}$ – Integer * Input/Output
On entry: a non-negative value.
On exit: may be used to request the solver to stop immediately. Specifically, if ${\mathbf{inform}}<0$ the solver will terminate immediately with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_USER_STOP; otherwise, the solver will proceed normally.
On entry: solver statistics at the end of the current iteration as described in stats.
6: $\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to monit.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e05kbc you may allocate memory and initialize these pointers with various quantities for use by monit when called from e05kbc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: monit should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e05kbc. If your code inadvertently does return any NaNs or infinities, e05kbc is likely to produce unexpected results.
4: $\mathbf{nvar}$ – Integer Input
On entry: $n$, the current number of decision variables $x$ in the model.
7: $\mathbf{stats}\left[100\right]$ – double Output
On exit: solver statistics at the end of the final iteration as given in the table below:
$0$: Number of calls to the objective function.
$1$: Total time spent in the solver (including time spent evaluating the objective).
$2$: Total time spent evaluating the objective function.
$3$: Number of sub-boxes.
$4$: Number of splits.
$5$: Cumulative number of splits by initialization list.
$6$: The current lowest level containing no split boxes.
$7$–$99$: Reserved for future use.
8: $\mathbf{comm}$ – Nag_Comm *
The NAG communication argument (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
9: $\mathbf{fail}$ – NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).
e05kbc returns with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_NOERROR if the iterates have converged to a point $x$ that satisfies the convergence criteria described in Section 7.
6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.
NE_BAD_PARAM
On entry, argument $\langle \mathit{value}\rangle$ had an illegal value.
NE_DIV_COMPLETE
The division procedure completed but your target value could not be reached.
Despite every sub-box being processed ${\mathbf{MCS\; Splits\; Limit}}$ times, the target value you provided in ${\mathbf{MCS\; Target\; Objective\; Value}}$ could not be found to the tolerances given in ${\mathbf{MCS\; Target\; Objective\; Error}}$ and ${\mathbf{MCS\; Target\; Objective\; Safeguard}}$. You could try reducing ${\mathbf{MCS\; Splits\; Limit}}$ or the objective tolerances.
NE_HANDLE
The supplied handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been properly initialized or it has been corrupted.
NE_INF_INIT_LIST
A finite initialization list could not be computed internally. Consider reformulating the bounds on the problem, try providing your own initialization list (${\mathbf{MCS\; Initialization\; Method}}=\mathrm{CUSTOM}$), use the randomization option (${\mathbf{MCS\; Initialization\; Method}}=\mathrm{RANDOM}$) or vary the value of ${\mathbf{Infinite\; Bound\; Size}}$.
The user-supplied initialization list contained infinite values, as determined by the optional parameter ${\mathbf{Infinite\; Bound\; Size}}$.
NE_INLIST_CLOSE
An error occurred during initialization. It is likely that points from the initialization list are very close together. Try relaxing the bounds on the variables or use a different initialization method.
NE_INT
On entry, user-supplied section $\mathit{list}(i,1:\mathit{numpts}\left[i-1\right])$ contained $\mathit{ndist}$ distinct elements, and $\mathit{ndist}<\mathit{numpts}\left[i-1\right]$: $\mathit{ndist}=\langle \mathit{value}\rangle$, $\mathit{numpts}\left[i-1\right]=\langle \mathit{value}\rangle$, $i=\langle \mathit{value}\rangle$.
For more details, please refer to Section 9.2.
On entry, user-supplied section $\mathit{list}(i,1:\mathit{numpts}\left[i-1\right])$ was not in ascending order: $\mathit{numpts}\left[i-1\right]=\langle \mathit{value}\rangle$, $i=\langle \mathit{value}\rangle$.
For more details, please refer to Section 9.2.
The dimension of the array ‘MCS List’ is not a multiple of ${\mathbf{nvar}}$.
For more details, please refer to Section 9.2.
NE_INT_2
On entry, user-supplied $\mathit{initpt}\left[i-1\right]=\langle \mathit{value}\rangle$, $i=\langle \mathit{value}\rangle$.
Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then $\mathit{initpt}\left[\mathit{i}-1\right]\ge 1$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$.
For more details, please refer to Section 9.2.
On entry, user-supplied $\mathit{initpt}\left[i-1\right]=\langle \mathit{value}\rangle$, $i=\langle \mathit{value}\rangle$ and $\mathit{sdlist}=\langle \mathit{value}\rangle$.
Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then $\mathit{initpt}\left[\mathit{i}-1\right]\le \mathit{sdlist}$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$.
For more details, please refer to Section 9.2.
On entry, user-supplied $\mathit{numpts}\left[i-1\right]=\langle \mathit{value}\rangle$, $i=\langle \mathit{value}\rangle$.
Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then $\mathit{numpts}\left[\mathit{i}-1\right]\ge 3$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$.
For more details, please refer to Section 9.2.
On entry, user-supplied $\mathit{numpts}\left[i-1\right]=\langle \mathit{value}\rangle$, $i=\langle \mathit{value}\rangle$ and $\mathit{sdlist}=\langle \mathit{value}\rangle$.
Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then $\mathit{numpts}\left[\mathit{i}-1\right]\le \mathit{sdlist}$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$.
For more details, please refer to Section 9.2.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.
NE_LINESEARCH_ERROR
An error occurred during linesearching. It is likely that your objective function is badly scaled: try rescaling it. Also, try relaxing the bounds or use a different initialization method. If the problem persists, please contact NAG quoting error code $\langle \mathit{value}\rangle$.
NE_MISSING_CALL
The optional parameter ${\mathbf{MCS\; Initialization\; Method}}$ was set to CUSTOM but one of the arrays $\mathit{list}$, $\mathit{initpt}$ or $\mathit{numpts}$ was not passed correctly to the handle.
For more details, please refer to Section 9.2.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library CL Interface for further information.
NE_NOT_PARSED
There were $n=\langle \mathit{value}\rangle$ variables and the optional parameter ${\mathbf{MCS\; Splits\; Limit}}$ $\mathit{smax}=\langle \mathit{value}\rangle$. Constraint: $\mathit{smax}>n+2$. Use e04zmc to set compatible option values.
NE_NULL_ARGUMENT
The problem requires the objfun values. Please provide a proper objfun function.
NE_PHASE
The problem is already being solved.
NE_REAL_2
On entry, user-supplied $\mathit{list}\left[i\right]\phantom{\rule{0.2em}{0ex}}\left[\mathit{j}\right]=\langle \mathit{value}\rangle$, $i=\langle \mathit{value}\rangle$, $\mathit{j}=\langle \mathit{value}\rangle$, and $\mathit{bl}\left[i-1\right]=\langle \mathit{value}\rangle$.
Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then $\mathit{list}\left[\mathit{i}\right]\phantom{\rule{0.2em}{0ex}}\left[\mathit{j}\right]\ge \mathit{bl}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$ and $\mathit{j}=1,2,\dots ,\mathit{numpts}\left[\mathit{i}-1\right]$.
For more details, please refer to Section 9.2.
On entry, user-supplied $\mathit{list}\left[i\right]\phantom{\rule{0.2em}{0ex}}\left[\mathit{j}\right]=\langle \mathit{value}\rangle$, $i=\langle \mathit{value}\rangle$, $\mathit{j}=\langle \mathit{value}\rangle$, and $\mathit{bu}\left[i-1\right]=\langle \mathit{value}\rangle$.
Constraint: if ${\mathbf{x}}\left[i-1\right]$ is not fixed then $\mathit{list}\left[i\right]\phantom{\rule{0.2em}{0ex}}\left[\mathit{j}\right]\le \mathit{bu}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$ and $\mathit{j}=1,2,\dots ,\mathit{numpts}\left[\mathit{i}-1\right]$.
For more details, please refer to Section 9.2.
NE_REF_MATCH
On entry, ${\mathbf{nvar}}=\langle \mathit{value}\rangle$, expected $\mathrm{value}=\langle \mathit{value}\rangle$.
Constraint: nvar must match the current number of variables of the model in the handle.
NE_SETUP_ERROR
This solver does not support the model defined in the handle.
NE_TOO_MANY_FEVALS
The function evaluations limit was exceeded.
Approximately ${\mathbf{MCS\; Max\; Objective\; Calls}}$ function calls have been made without your chosen termination criterion being satisfied.
NE_USER_STOP
User-supplied monitoring function requested termination.
User-supplied objective function requested termination.
7 Accuracy
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_NOERROR on exit, then the vector returned in the array x is an estimate of the solution $\mathbf{x}$ whose function value satisfies your termination criterion: the function value was static for ${\mathbf{MCS\; Static\; Limit}}$ sweeps through levels, or
$F\left(\mathbf{x}\right)\le \mathit{objval}+\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(\mathit{objerr}\left|\mathit{objval}\right|,\mathit{objsfg}\right)\text{,}$
where $\mathit{objval}$ is the value of the optional parameter ${\mathbf{MCS\; Target\; Objective\; Value}}$, $\mathit{objerr}$ is the value of the optional parameter ${\mathbf{MCS\; Target\; Objective\; Error}}$, and $\mathit{objsfg}$ is the value of the optional parameter ${\mathbf{MCS\; Target\; Objective\; Safeguard}}$.
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
e05kbc is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e05kbc makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9 Further Comments
9.1 Description of the Printed Output
The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent file IDs which are set by the optional parameters ${\mathbf{Print\; File}}$ and ${\mathbf{Monitoring\; File}}$. Optional parameters ${\mathbf{Print\; Level}}$, ${\mathbf{Print\; Options}}$, ${\mathbf{Monitoring\; Level}}$ and ${\mathbf{Print\; Solution}}$ determine the exposed level of detail. This allows, for example, a detailed log file to be generated while condensed information is displayed on the screen.
By default (${\mathbf{Print\; File}}=6$, ${\mathbf{Print\; Level}}=2$), four sections are printed to the standard output: a header, a list of options, an iteration log and a summary.
Header
The header is a message indicating the start of the solver. It should look like:
-------------------------------------------------
E05KB, MCS method for bound constrained problems
-------------------------------------------------
Optional parameters list
If ${\mathbf{Print\; Options}}=\mathrm{YES}$, a list of the optional parameters and their values is printed. The list shows all options of the solver, each displayed on one line. The line contains the option name, its current value and an indicator for how it was set. The options left at their defaults are noted by ‘d’ and the ones you have set are noted by ‘U’. Note that the output format is compatible with the file format expected by e04zpc. The output looks as follows:
Mcs Initialization Method = Simple Bounds * d
Mcs Local Searches = On * d
Mcs Local Searches Limit = 50 * d
Mcs Local Searches Tolerance = 2.22045E-16 * d
Mcs Max Objective Calls = 0 * d
Mcs Monitor Frequency = 0 * d
Mcs Repeatability = Off * d
Mcs Splits Limit = 0 * d
Mcs Static Limit = 0 * d
Mcs Target Objective Error = 1.02648E-04 * d
Mcs Target Objective Safeguard= 1.05367E-08 * d
Mcs Target Objective Value = -1.7977E+308 * d
Mcs Print Frequency = 1 * d
Problem statistics
If ${\mathbf{Print\; Level}}\ge 2$, statistics on the problem are printed, for example:
Problem Statistics
No of variables 4
free (unconstrained) 1
bounded 3
Objective function Nonlinear
Iteration log
If ${\mathbf{Print\; Level}}\ge 2$, the solver will print a summary line for each split. Each line shows the split number, the value of the objective function (obj), the number of sweeps (nsweeps), and the cumulative number of objective function evaluations (nf).
Summary
Once the solver finishes, a summary is produced, for example:
Status: Converged, objective function is static
Value of the objective -9.00000E+00
Number of objective function evaluations 81
Number of sweeps 10
Note that only the iterations that decrease the objective function are printed in the iteration log; objective evaluations are, therefore, likely to happen between the last printed iteration and convergence. This leads to a small difference between the last line of the iteration log and the final summary in the number of function evaluations.
Optionally, if ${\mathbf{Stats\; Time}}=\mathrm{YES}$, the timings are printed:
Timings
Total time spent in the solver 0.056
Time spent in the objective evaluation 0.012
Additionally, if ${\mathbf{Print\; Solution}}=\mathrm{YES}$, the solution is printed along with the bounds.
9.2 Custom Initialization Lists
By default, the initialization list as described in Section 11.1 is computed internally by the solver using one of the methods defined by the optional parameter ${\mathbf{MCS\; Initialization\; Method}}$. It is, however, possible to customize the initialization method by first setting ${\mathbf{MCS\; Initialization\; Method}}$ to $\mathrm{CUSTOM}$ and providing the following information to the handle before the solver call through the utility routines e04rxc and e04rwc:
$\mathit{numpts}$ encodes the number of splitting points in each coordinate direction; i.e., for $i=1,\mathrm{...},{\mathbf{nvar}}$, $\mathit{numpts}\left(i\right)$ is the number of splits initially created in the $i$th direction. The coordinates of the initial boxes must be provided in the argument $\mathit{list}({\mathbf{nvar}}*\mathit{sdlist})$, where $\mathit{sdlist}$ is defined implicitly when $\mathit{list}$ is passed to the problem structure.
To provide the array numpts, the routine e04rwc must be called with the command string ${\mathbf{cmdstr}}=\text{'}\mathrm{MCS\; number\; points}\text{'}$.
Constraint:
$\mathit{numpts}\left(\mathit{i}\right)\ge 3$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$
$\mathit{initpt}$ designates a point stored in $\mathit{list}$ that you wish e05kbc to consider as an ‘initial point’ for the purposes of the splitting procedure. Call this initial point ${\mathit{x}}^{0}$. The coordinates of ${\mathit{x}}^{0}$ correspond to a set of indices ${\mathit{j}}_{i}$, $i=1,\mathrm{...},{\mathbf{nvar}}$, such that ${\mathit{x}}_{i}^{0}$ is stored in $\mathit{list}(i+({j}_{i}-1)\times {\mathbf{nvar}})$. You must set $\mathit{initpt}\left(i\right)={j}_{i}$ for $i=1,\mathrm{...},{\mathbf{nvar}}$.
To provide the array initpt, the routine e04rwc must be called with the command string ${\mathbf{cmdstr}}=\text{'}\mathrm{MCS\; initial\; points}\text{'}$.
Constraint:
$1\le \mathit{initpt}\left(\mathit{i}\right)\le \mathit{sdlist}={\mathrm{max}}_{\mathit{i}}\mathit{numpts}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{nvar}}$
$\mathit{list}\left({\mathbf{nvar}}\times \mathit{sdlist}\right)$: real array
$\mathit{list}$ is the ‘initialization list’: whenever a sub-box in the algorithm is split for the first time (either during the initialization procedure or later), for each non-fixed coordinate $i$ the split is done at the values $\mathit{list}(i+(j-1)\times {\mathbf{nvar}})$, $j=1,\mathrm{...},\mathit{numpts}\left(i\right)$, as well as at some adaptively chosen intermediate points.
For all $i$, the values $\mathit{list}(i+(j-1)\times {\mathbf{nvar}})$, $j=1,\mathrm{...},\mathit{numpts}\left(i\right)$, must be in ascending order with each entry being distinct.
The dimension sdlist is defined implicitly as $\mathit{sdlist}=\frac{\mathit{dim}\left(\mathit{list}\right)}{{\mathbf{nvar}}}$ when list is passed to the handle.
To provide the array list, the routine e04rxc must be called with the command string ${\mathbf{cmdstr}}=\text{'}\mathrm{MCS\; list}\text{'}$.
Constraint:
for all $j=1,\mathrm{...},\mathit{numpts}\left(\mathit{i}\right)$ and $i=1,\mathrm{...},{\mathbf{nvar}}$, $\mathit{bl}\left(\mathit{i}\right)\le \mathit{list}(\mathit{i}+(j-1)\times {\mathbf{nvar}})\le \mathit{bu}\left(\mathit{i}\right)$, where bl and bu are the bounds defined on the variables.
10 Example
This example finds the global minimum of the ‘peaks’ function in two dimensions
$F\left(x,y\right)=3{\left(1-x\right)}^{2}{e}^{-{x}^{2}-{\left(y+1\right)}^{2}}-10\left(\frac{x}{5}-{x}^{3}-{y}^{5}\right){e}^{-{x}^{2}-{y}^{2}}-\frac{1}{3}{e}^{-{\left(x+1\right)}^{2}-{y}^{2}}$
on the box $[\mathrm{-3},3]\times [\mathrm{-3},3]$.
The function $F$ has several local minima and one global minimum in the given box. The global minimum is approximately located at $(0.23,-1.63)$, where the function value is approximately $-6.55$.
11 Algorithmic Details
Here we summarise the main features of the MCS algorithm used in e05kbc, and we introduce some terminology used in the description of the function and its arguments. We assume throughout that work is only done in coordinates $i$ in which ${x}_{i}$ is free to vary. The MCS algorithm is fully described in Huyer and Neumaier (1999).
11.1 Initialization and Sweeps
Each sub-box is determined by a basepoint $\mathbf{x}$ and an opposite point $\mathbf{y}$. We denote such a sub-box by $B[\mathbf{x},\mathbf{y}]$. The basepoint is allowed to belong to more than one sub-box, is usually a boundary point, and is often a vertex.
An initialization procedure produces an initial set of sub-boxes. Whenever a sub-box is split along a coordinate $i$ for the first time (in the initialization procedure or later), the splitting is done at three or more user-defined values ${\left\{{x}_{i}^{j}\right\}}_{j}$ at which the objective function is sampled, and at some adaptively chosen intermediate points. At least four children are generated. More precisely, we assume that we are given
${\ell}_{i}\le {x}_{i}^{1}<{x}_{i}^{2}<\cdots <{x}_{i}^{{L}_{i}}\le {u}_{i}\text{, with }{L}_{i}\ge 3\text{, for }i=1,2,\dots ,n\text{,}$
and a vector $\mathbf{p}$ that, for each $i$, locates within ${\left\{{x}_{i}^{j}\right\}}_{j}$ the $i$th coordinate of an initial point ${\mathbf{x}}^{0}$; that is, if ${x}_{i}^{0}={x}_{i}^{j}$ for some $j=1,2,\dots ,{L}_{i}$, then ${p}_{i}=j$. A good guess for the global optimum can be used as ${\mathbf{x}}^{0}$.
The initialization points and the vectors $\mathbf{\ell}$ and $\mathbf{p}$ are collectively called the initialization list (and sometimes we will refer to just the initialization points as ‘the initialization list’, whenever this causes no confusion). The initialization data may be input by you, or they can be set to sensible default values by e05kbc: if you provide them yourself, the arrays $\mathit{list}(i+(j-1)\times {\mathbf{nvar}})$ should contain ${x}_{i}^{j}$, $\mathit{numpts}\left[i-1\right]$ should contain ${L}_{i}$, and $\mathit{initpt}\left[i-1\right]$ should contain
${p}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$ and $\mathit{j}=1,2,\dots ,{L}_{\mathit{i}}$; please refer to Section 9.2 for more information on how to define a custom initialization list.
If you wish e05kbc to use one of its preset initialization methods, you could choose one of two simple, three-point methods (see Figure 1). If the list generated by one of these methods contains infinite values, attempts are made to generate a safeguarded list using the function $\mathrm{subint}(x,y)$ (which is also used during the splitting procedure, and is described in Section 11.2). If infinite values persist, e05kbc exits with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$ NE_INF_INIT_LIST.
There is also the option to generate an initialization list with the aid of linesearches (by setting ${\mathbf{MCS\; Initialization\; Method}}=\mathrm{linesearch}$). Starting with the absolutely smallest point in the root box, linesearches are made along each coordinate. For each coordinate, the local minimizers found by the linesearches are put into the initialization list. If there were fewer than three minimizers, they are augmented by close-by values.
The final preset initialization option (${\mathbf{MCS\; Initialization\; Method}}=\mathrm{random}$) generates a randomized list, so that independent multiple runs may be made if you suspect a global optimum has not been found. Depending on whether you set the optional parameter ${\mathbf{MCS\; Repeatability}}$ to $\mathrm{ON}$ or $\mathrm{OFF}$, the random state is initialized to give a repeatable or non-repeatable sequence. The components of list are then generated from a uniform distribution on the root box if the box is finite, or else in a safeguarded fashion if any bound is infinite.
Given an initialization list (preset or otherwise), e05kbc evaluates $F$ at ${\mathbf{x}}^{0}$ and sets the initial estimate of the global minimum, ${\mathbf{x}}^{*}$, to ${\mathbf{x}}^{0}$. Then, for $i=1,2,\dots ,n$, the objective function $F$ is evaluated at ${L}_{i}-1$ points that agree with ${\mathbf{x}}^{*}$ in all but the $i$th coordinate. We obtain pairs $({\hat{\mathbf{x}}}^{\mathit{j}},{f}_{i}^{\mathit{j}})$, for $\mathit{j}=1,2,\dots ,{L}_{i}$, with ${\mathbf{x}}^{*}={\hat{\mathbf{x}}}^{{j}_{1}}$, say, and, for $j\ne {j}_{1}$,
${\hat{x}}_{k}^{j}=\left\{\begin{array}{ll}{x}_{k}^{*}& k\ne i\text{;}\\ {x}_{i}^{j}& k=i\text{,}\end{array}\right.\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}{f}_{i}^{j}=F\left({\hat{\mathbf{x}}}^{j}\right)\text{.}$
The point having the smallest function value is renamed ${\mathbf{x}}^{*}$ and the procedure is repeated with the next coordinate.
Once e05kbc has a full set of initialization points and function values, it can generate an initial set of sub-boxes. Recall that the root box is $B[\mathbf{x},\mathbf{y}]=[\mathbf{\ell},\mathbf{u}]$, having basepoint $\mathbf{x}={\mathbf{x}}^{0}$. The opposite point $\mathbf{y}$ is a corner of $[\mathbf{\ell},\mathbf{u}]$ farthest away from $\mathbf{x}$, in some sense. The point $\mathbf{x}$ need not be a vertex of $[\mathbf{\ell},\mathbf{u}]$, and $\mathbf{y}$ is entitled to have infinite coordinates. We loop over each coordinate $i$, splitting the current box along coordinate $i$ into $2{L}_{i}-2$, $2{L}_{i}-1$ or $2{L}_{i}$ sub-intervals with exactly one of the ${\hat{x}}_{i}^{j}$ as endpoints, depending on whether two, one or none of the ${\hat{x}}_{i}^{j}$ are on the boundary. Thus, as well as splitting at
${\hat{x}}_{i}^{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,{L}_{i}$, we split at additional points ${z}_{i}^{\mathit{j}}$, for $\mathit{j}=2,3,\dots ,{L}_{i}$. These additional ${z}_{i}^{j}$ are such that
${z}_{i}^{j}={\hat{x}}_{i}^{j-1}+{q}^{m}\left({\hat{x}}_{i}^{j}-{\hat{x}}_{i}^{j-1}\right)\text{,}$
where $q$ is the golden-section ratio $(\sqrt{5}-1)/2$, and the exponent $m$ takes the value $1$ or $2$, chosen so that the sub-box with the smaller function value gets the larger fraction of the interval. Each child sub-box gets as basepoint the point obtained from ${\mathbf{x}}^{*}$ by changing ${x}_{i}^{*}$ to the ${x}_{i}^{j}$ that is a boundary point of the corresponding $i$th coordinate interval; this new basepoint, therefore, has function value ${f}_{i}^{j}$. The opposite point is derived from $\mathbf{y}$ by changing ${y}_{i}$ to the other end of that interval.
e05kbc can now rank the coordinates based on an estimated variability of $F$. For each $i$ we compute the union of the ranges of the quadratic interpolant through any three consecutive ${\hat{x}}_{i}^{j}$, taking the difference between the upper and lower bounds obtained as a measure of the variability of $F$ in coordinate $i$. A vector $\mathbf{\pi}$ is populated in such a way that coordinate $i$ has the ${\pi}_{i}$th highest estimated variability. For tiebreaks, when the ${\mathbf{x}}^{*}$ obtained after splitting coordinate $i$ belongs to two sub-boxes, the one that contains the minimizer of the quadratic models is designated the current sub-box for coordinate $i+1$.
Boxes are assigned levels in the following manner. The root box is given level $1$. When a sub-box of level $s$ is split, the child with the smaller fraction of the golden-section split receives level $s+2$; all other children receive level $s+1$. The box with the better function value is given the larger fraction of the splitting interval and the smaller level because then it is more likely to be split again more quickly. We see that after the initialization procedure the first level is empty and the non-split boxes have levels $2,\dots ,{n}_{r}+2$, so it is meaningful to choose ${s}_{\mathrm{max}}$ much larger than ${n}_{r}$. Note that the internal structure of e05kbc demands that ${s}_{\mathrm{max}}$ be at least ${n}_{r}+3$.
Examples of initializations in two dimensions are given in Figure 1. In both cases the initial point is ${\mathbf{x}}^{0}=(\mathbf{\ell}+\mathbf{u})/2$; on the left the initialization points are the bounds and the midpoint of each coordinate interval, while on the right the points are placed strictly inside the box, away from the bounds.
In Figure 1, basepoints and levels after initialization are displayed. Note that these initialization lists correspond to ${\mathbf{MCS\; Initialization\; Method}}=\mathrm{simple\; bounds}$ and ${\mathbf{MCS\; Initialization\; Method}}=\mathrm{simple\; off-bounds}$, respectively.
Figure 1: Examples of the initialization procedure
After initialization, a series of sweeps through levels is begun. A sweep is defined by three steps:
(i) scan the list of non-split sub-boxes. Fill a record list $\mathbf{b}$ according to ${b}_{s}=0$ if there is no box at level $s$, and with ${b}_{s}$ pointing to a sub-box with the lowest function value among all sub-boxes with level $s$ otherwise, for $0<s<{s}_{\mathrm{max}}$;
(ii) the sub-box with label ${b}_{s}$ is a candidate for splitting. If the sub-box is not to be split, according to the rules described in Section 11.2, increase its level by $1$ and update ${b}_{s+1}$ if necessary. If the sub-box is split, mark it so, insert its children into the list of sub-boxes, and update $\mathbf{b}$ if any child with level ${s}^{\prime}$ yields a strict improvement of $F$ over those sub-boxes at level ${s}^{\prime}$;
(iii) increment $s$ by $1$. If $s={s}_{\mathrm{max}}$ then display the monitoring information and start a new sweep; else if ${b}_{s}=0$ then repeat this step; else display monitoring information and go to the previous step.
Clearly, each sweep ends after at most ${s}_{\mathrm{max}}-1$ visits of the third step.
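Step (i) of a sweep can be sketched as below. This is an illustration only, assuming a minimal box representation of (label, level, value) tuples; it is not the NAG data structure.

```python
# Illustrative sketch (not the NAG code) of step (i) of a sweep:
# build the record list b, where b[s] points to a sub-box with the
# lowest function value among the non-split sub-boxes at level s,
# and b[s] == 0 if there is no box at that level.  Boxes are
# represented here as (label, level, fvalue) tuples; labels start
# at 1 so that 0 can mean "no box".

def fill_record_list(boxes, s_max):
    """boxes: iterable of (label, level, fvalue) for non-split boxes."""
    best_f = {}                      # level -> best function value seen so far
    b = [0] * s_max                  # b[s] == 0 means no box at level s
    for label, level, fval in boxes:
        if 0 < level < s_max and (level not in best_f or fval < best_f[level]):
            best_f[level] = fval
            b[level] = label
    return b
```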
11.2 Splitting
Each sub-box is stored by e05kbc as a set of information about the history of the sub-box: the label of its parent, a label identifying which child of the parent it is, etc. Whenever a sub-box $B[\mathbf{x},\mathbf{y}]$ of level $s<{s}_{\mathrm{max}}$ is a candidate for splitting, as described in Section 11.1, we recover $\mathbf{x}$, $\mathbf{y}$, and the number, ${n}_{j}$, of times coordinate $j$ has been split in the history of $B$. Sub-box $B$ could be split in one of two ways.
(i)Splitting by rank
If $s>2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({n}_{j}+1\right)\right)$, the box is always split. The splitting index is set to a coordinate $i$ such that ${n}_{i}=\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({n}_{j}\right)$.
(ii)Splitting by expected gain
If $s\le 2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({n}_{j}+1\right)\right)$, the sub-box could be split along a coordinate where a maximal gain in function value is expected. This gain is estimated according to a local separable quadratic model obtained by fitting to $2{n}_{r}+1$ function values. If the expected gain is too small the sub-box is not split at all, and its level is increased by $1$.
Eventually, a sub-box that is not eligible for splitting by expected gain will reach level $2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({n}_{j}+1\right)\right)+1$ and then be split by rank, as long as ${s}_{\mathrm{max}}$ is large enough. As ${s}_{\mathrm{max}}\to \infty $, the rule for splitting by rank ensures that each coordinate is split arbitrarily often.
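The choice between the two splitting modes, and the selection of the splitting index in the rank case, can be sketched as follows (illustrative Python, not the NAG code; the function name is invented for the example).

```python
# Illustrative sketch (not the NAG code): decide whether a box of
# level s must be split by rank and, if so, pick the splitting index.
# n[j] is the number of times coordinate j has been split in the box's
# history; pi[j] is the variability-rank permutation, where a lower
# pi[j] means a higher variability rank (see Section 11.2.1).

def split_by_rank_index(s, n, pi, nr):
    """Return the splitting coordinate, or None if rank splitting does
    not apply at this level (the expected-gain rule applies instead)."""
    if s <= 2 * nr * (min(n) + 1):
        return None
    n_min = min(n)
    candidates = [i for i in range(len(n)) if n[i] == n_min]
    return min(candidates, key=lambda i: pi[i])   # lowest pi_i wins
```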
Before describing the details of each splitting method, we introduce the procedure for correctly handling splitting at adaptive points and for dealing with unbounded intervals. Suppose we want to split the $i$th coordinate interval $\square\{{x}_{i},{y}_{i}\}$, where we define $\square\{{x}_{i},{y}_{i}\}=[\mathrm{min}\phantom{\rule{0.125em}{0ex}}({x}_{i},{y}_{i}),\mathrm{max}\phantom{\rule{0.125em}{0ex}}({x}_{i},{y}_{i})]$, for ${x}_{i}\in R$ and ${y}_{i}\in \overline{R}$, and where $\mathbf{x}$ is the basepoint of the sub-box being considered. The descendants of the sub-box should shrink sufficiently fast, so we should not split too close to ${x}_{i}$. Moreover, if ${y}_{i}$ is large we want the new splitting value not to be too large, so we force it to belong to some smaller interval $\square\{{\xi}^{\prime},{\xi}^{\prime \prime}\}$, determined by
11.2.1 Splitting by rank
Consider a sub-box $B$ with level $s>2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({n}_{j}+1\right)\right)$. Although the sub-box has reached a high level, there is at least one coordinate along which it has not been split very often. Among the $i$ such that ${n}_{i}=\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({n}_{j}\right)$ for $B$, select the splitting index to be the coordinate with the lowest ${\pi}_{i}$ (and hence highest variability rank). ‘Splitting by rank’ refers to the ranking of the coordinates by ${n}_{i}$ and ${\pi}_{i}$.
If ${n}_{i}=0$, so that $B$ has never been split along coordinate $i$, the splitting is done according to the initialization list and the adaptively chosen golden-section split points, as described in Section 11.1. Also as covered there, new basepoints and opposite points are generated. The children having the smaller fraction of the golden-section split (that is, those with larger function values) are given level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\{s+2,{s}_{\mathrm{max}}\}$. All other children are given level $s+1$.
Otherwise, $B$ ranges between ${x}_{i}$ and ${y}_{i}$ in the $i$th coordinate direction. The splitting value is selected to be ${z}_{i}={x}_{i}+2(\mathrm{subint}({x}_{i},{y}_{i})-{x}_{i})/3$; we are not attempting to split based on a large reduction in function value, merely in order to reduce the size of a large interval, so ${z}_{i}$ may not be optimal. Sub-box $B$ is split at ${z}_{i}$ and the golden-section split point, producing three parts and requiring only one additional function evaluation, at the point ${\mathbf{x}}^{\prime}$ obtained from $\mathbf{x}$ by changing the $i$th coordinate to ${z}_{i}$. The child with the smaller fraction of the golden-section split is given level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\{s+2,{s}_{\mathrm{max}}\}$, while the other two parts are given level $s+1$. Basepoints are assigned as follows: the basepoint of the first child is taken to be $\mathbf{x}$, and the basepoint of the second and third children is the point ${\mathbf{x}}^{\prime}$. Opposite points are obtained by changing ${y}_{i}$ to the other end of the $i$th coordinate-interval of the corresponding child.
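The trisection value ${z}_{i}$ above can be sketched as follows. The safeguard `subint` used here is an assumption for illustration only, in the spirit of Huyer and Neumaier (1999); the details elided from this document may differ.

```python
# Illustrative sketch of the trisection used to reduce a large
# interval.  The `subint` safeguard below is an ASSUMED form (cap the
# step from x at 1000 units or 1000|x|, whichever is larger); it is
# not the definition used by e05kbc.

def subint(x, y):
    """A point between x and y that is not too far from x."""
    cap = 1000.0 * max(abs(x), 1.0)        # assumed safeguard constant
    if abs(y - x) > cap:                   # handles y = +/-inf as well
        return x + cap * (1.0 if y > x else -1.0)
    return y

def long_interval_split(x_i, y_i):
    """Splitting value z_i = x_i + 2*(subint(x_i, y_i) - x_i)/3."""
    return x_i + 2.0 * (subint(x_i, y_i) - x_i) / 3.0
```

Note how an unbounded ${y}_{i}$ still yields a finite splitting value.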
11.2.2 Splitting by expected gain
When a sub-box $B$ has level $s\le 2{n}_{r}\left(\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({n}_{j}+1\right)\right)$, we compute the optimal splitting index and splitting value from a local separable quadratic used as a simple local approximation of the objective function. To fit this curve, for each coordinate we need two additional points and their function values. Such data may be recoverable from the history of $B$: whenever the $i$th coordinate was split in the history of $B$, we obtained values that can be used for the current quadratic interpolation in coordinate $i$.
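The one-dimensional quadratic interpolation underlying each coordinate of the separable model can be sketched as follows (Newton divided differences; illustrative only, not the NAG code).

```python
# Illustrative sketch: fit the quadratic through three points
# (p[0], f[0]), (p[1], f[1]), (p[2], f[2]) for one coordinate of the
# separable model, in Newton form.  Not the NAG implementation.

def quad_coeffs(p, f):
    """Newton-form coefficients of the interpolating quadratic:
    q(t) = f0 + d1*(t - p0) + c2*(t - p0)*(t - p1)."""
    d1 = (f[1] - f[0]) / (p[1] - p[0])
    d2 = (f[2] - f[1]) / (p[2] - p[1])
    c2 = (d2 - d1) / (p[2] - p[0])
    return f[0], d1, c2

def quad_eval(coeffs, p, t):
    f0, d1, c2 = coeffs
    return f0 + d1 * (t - p[0]) + c2 * (t - p[0]) * (t - p[1])
```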
We loop over $i$; for each coordinate we pursue the history of $B$ back to the root box, and we take the first two points and function values we find, since these are expected to be closest to the current basepoint $\mathbf{x}$. If the current coordinate has not yet been split we use the initialization list. Then we generate a local separable model $e\left(\mathbf{\xi}\right)$ for $F\left(\mathbf{\xi}\right)$ by interpolation at $\mathbf{x}$ and the $2{n}_{r}$ additional points just collected:
We define the expected gain ${\hat{e}}_{i}$ in function value when we evaluate at a new point obtained by changing coordinate $i$ in the basepoint, for each $i$, based on two cases:
we can approximate the maximal gain expected when changing ${x}_{i}$ only. We will choose the splitting value from $\square\{{\xi}^{\prime},{\xi}^{\prime \prime}\}$. We compute
where ${f}_{\mathrm{best}}$ is the current best function value (including those function values obtained by local optimization), we expect the sub-box to contain a better point and so we split it, using as splitting index the component with minimal ${\hat{e}}_{i}$. Equation (1) prevents wasting function calls by avoiding splitting sub-boxes whose basepoints have bad function values. These sub-boxes will eventually be split by rank anyway.
We now have a splitting index and a splitting value ${z}_{i}$. The sub-box is split at ${z}_{i}$ as long as ${z}_{i}\ne {y}_{i}$, and at the golden-section split point; two or three children are produced. The larger fraction of the golden-section split receives level $s+1$, while the smaller fraction receives level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\{s+2,{s}_{\mathrm{max}}\}$. If ${z}_{i}\ne {y}_{i}$ and the third child is larger than the smaller of the two children from the golden-section split, the third child receives level $s+1$; otherwise it is given the level $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\{s+2,{s}_{\mathrm{max}}\}$. The basepoint of the first child is set to $\mathbf{x}$, and the basepoint of the second (and third if it exists) is obtained by changing the $i$th coordinate of $\mathbf{x}$ to ${z}_{i}$. The opposite points are again derived by changing ${y}_{i}$ to the other end of the $i$th coordinate interval of $B$.
If equation (1) does not hold, we expect no improvement. We do not split, and we increase the level of $B$ by $1$.
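The expected-gain decision can be sketched as below. Equation (1) is not reproduced in this document, so the acceptance test here assumes the natural form "predicted model minimum beats ${f}_{\mathrm{best}}$"; the grid minimization of each one-dimensional model is also purely illustrative.

```python
# Hedged sketch of the expected-gain decision (not the NAG code):
# minimize each coordinate's 1D model over its admissible interval,
# take the coordinate with the largest predicted decrease, and split
# only if the predicted value beats f_best.  The form of the
# acceptance test is an ASSUMPTION standing in for equation (1).

def expected_gain_split(models, intervals, f_base, f_best):
    """models[i]: callable 1D model of F along coordinate i with the
    other coordinates fixed at the basepoint; intervals[i] = (lo, hi).
    Returns (i, z_i), or None if no split is predicted to help."""
    best = None
    for i, (q, (lo, hi)) in enumerate(zip(models, intervals)):
        # crude grid minimization, for illustration only
        z = min((lo + k * (hi - lo) / 100.0 for k in range(101)), key=q)
        gain = q(z) - f_base               # e_hat_i, expected change (<= 0)
        if best is None or gain < best[2]:
            best = (i, z, gain)
    i, z, gain = best
    if f_base + gain < f_best:             # assumed form of equation (1)
        return i, z
    return None                            # no improvement expected: no split
```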
11.3 Local Search
The local optimization algorithm used by e05kbc performs linesearches along directions that are determined by minimizing quadratic models, all subject to bound constraints. Triples of vectors are computed using coordinate searches based on linesearches. These triples are used in triple-search procedures to build local quadratic models for $F$. A trust-region-type approach is then used to minimize these models; more information about the coordinate search and the triple search can be found in Huyer and Neumaier (1999).
The local search starts by looking for better points without being too local, by making a triple search using points found by a coordinate search. This yields a new point and function value, an approximation of the gradient of the objective, and an approximation of the Hessian of the objective. Then the quadratic model for $F$ is minimized over a small box, with the solution to that minimization problem then being used as a linesearch direction to minimize the objective. A measure $r$ is computed to quantify the predictive quality of the quadratic model.
The third stage is the checking of termination criteria. The local search will stop if more than $\mathit{loclim}$ visits to this part of the local search have occurred, where $\mathit{loclim}$ is the value of the optional parameter ${\mathbf{MCS\; Local\; Searches\; Limit}}$. If that is not the case, it will stop if the limit on function calls has been exceeded (see the description of the optional parameter ${\mathbf{MCS\; Max\; Objective\; Calls}}$). The final criterion checks if no improvement can be made to the function value, or whether the approximated gradient $\mathbf{g}$ is small, in the sense that
The vector ${\mathbf{x}}_{\mathrm{old}}$ is the best point at the start of the current loop in this iterative local-search procedure, the constant $\mathit{loctol}$ is the value of the optional parameter ${\mathbf{MCS\; Local\; Searches\; Tolerance}}$, $f$ is the objective value at $\mathbf{x}$, and ${f}_{0}$ is the smallest function value found by the initialization procedure.
Next, e05kbc attempts to move away from the boundary, if any components of the current point lie there, using linesearches along the offending coordinates. Local searches are terminated if no improvement could be made.
The fifth stage carries out another triple search, but this time it does not use points from a coordinate search, rather points lying within the trust region box are taken.
The final stage modifies the trust region box to be bigger or smaller, depending on the quality of the quadratic model, minimizes the new quadratic model on that box, and does a linesearch in the direction of the minimizer. The value of $r$ is updated using the new data, and then we go back to the third stage (checking of termination criteria).
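The trust-region-box update driven by the quality measure $r$ can be sketched as follows. The thresholds below are conventional trust-region choices, not those used inside e05kbc.

```python
# Illustrative sketch of a trust-region-style update driven by the
# model-quality measure r (actual reduction vs. predicted reduction).
# The thresholds 0.25 and 0.75 are conventional ASSUMED values, not
# those of e05kbc.

def update_radius(radius, r, shrink=0.5, grow=2.0):
    if r < 0.25:          # model predicted poorly: shrink the box
        return shrink * radius
    if r > 0.75:          # model predicted well: allow a bigger box
        return grow * radius
    return radius         # otherwise keep the current box size
```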
The Hessians of the quadratic models generated by the local search may not be positive definite, so e05kbc uses a general nonlinear optimizer to minimize the models.
12 Optional Parameters
Several optional parameters in e05kbc define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e05kbc these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling e04zmc anytime between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
the keywords;
a parameter value,
where the letters $a$, $i$ and $r$ denote options that take character, integer and real values respectively;
the default value, where the symbol $\epsilon $ is a generic notation for machine precision (see X02AJC).
All options accept the value $\mathrm{DEFAULT}$ to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
MCS Initialization Method
$a$
Default $\text{}=\mathrm{SIMPLE\; BOUNDS}$
The solver provides several methods to produce the initial set of sub-boxes. When ${\mathbf{MCS\; Initialization\; Method}}=\mathrm{SIMPLE\; BOUNDS}$, $\mathrm{SIMPLE\; OFF-BOUNDS}$, $\mathrm{LINESEARCH}$ or $\mathrm{RANDOM}$, e05kbc will generate the sub-boxes internally. Please refer to Section 11.1 for more details about the differences between the automatic methods.
It is also possible to create a custom initialization list by setting ${\mathbf{MCS\; Initialization\; Method}}=\mathrm{CUSTOM}$. Section 9.2 describes how to provide the custom initialization list to the solver.
This puts an approximate limit on the number of function calls allowed. The total number of calls made is checked at the top of an internal iteration loop, so a few more calls than ${\mathbf{MCS\; Max\; Objective\; Calls}}$ may be made.
This parameter specifies the frequency with which information about each sweep is printed to ${\mathbf{Print\; File}}$ and/or ${\mathbf{Monitoring\; File}}$. By default, information is printed for every sweep.
If you want to try to accelerate convergence of e05kbc by starting local searches from candidate minima, you will require ${\mathbf{MCS\; Local\; Searches}}$ to be $\mathrm{ON}$.
Constraint: ${\mathbf{MCS\; Local\; Searches}}=\mathrm{ON}$ or $\mathrm{OFF}$
MCS Local Searches Limit
$i$
Default $\text{}=50$
This defines the maximal number of iterations to be used in the trust region loop of the local-search procedure.
The value of ${\mathbf{MCS\; Local\; Searches\; Tolerance}}$ is the multiplier used during local searches as a stopping criterion for when the approximated gradient is small, in the sense described in Section 11.3.
For use with random initialization lists (${\mathbf{MCS\; Initialization\; Method}}=\mathrm{random}$). When set to $\mathrm{ON}$, the initialization list will be the same for every call of e05kbc.
Constraint: ${\mathbf{MCS\; Repeatability}}=\mathrm{ON}$ or $\mathrm{OFF}$
MCS Splits Limit
$i$
Default $\text{}=\lfloor d({n}_{r}+2)/3\rfloor $
Along with the initialization list, this defines a limit on the number of times the root box will be split along any single coordinate direction. If ${\mathbf{MCS\; Local\; Searches}}$ is $\mathrm{OFF}$ you may find the default value to be too small.
As the default termination criterion, computation stops when the best function value is static for ${\mathbf{MCS\; Static\; Limit}}$ sweeps through levels. This parameter is ignored if you have specified a target value to reach in ${\mathbf{MCS\; Target\; Objective\; Value}}$.
Constraint: ${\mathbf{MCS\; Static\; Limit}}>0$
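The default termination rule can be sketched as follows (illustrative Python, not the NAG code; the function name and the per-sweep representation are invented for the example).

```python
# Illustrative sketch of the default termination criterion: stop once
# the best function value has not improved for `static_limit`
# consecutive sweeps through levels.  Not the NAG implementation.

def run_sweeps(sweep_best_values, static_limit):
    """sweep_best_values[k] is the best value known after sweep k+1.
    Return the 1-based index of the sweep at which the algorithm would
    stop, or None if the static limit is never reached."""
    best = float('inf')
    stalled = 0
    for k, f in enumerate(sweep_best_values, start=1):
        if f < best:
            best, stalled = f, 0       # improvement: reset the counter
        else:
            stalled += 1
            if stalled >= static_limit:
                return k
    return None
```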
MCS Target Objective Error
$r$
Default $\text{}={\epsilon}^{\frac{1}{4}}$
If you have given a target objective value to reach in ${\mathbf{MCS\; Target\; Objective\; Value}}$, this optional parameter sets your desired relative error between F(x) and ${\mathbf{MCS\; Target\; Objective\; Value}}$, as described in Section 7. See also the description of the optional parameter ${\mathbf{MCS\; Target\; Objective\; Safeguard}}$.
If you have given a target objective value to reach in ${\mathbf{MCS\; Target\; Objective\; Value}}$, this optional parameter sets your desired safeguarded termination tolerance for when ${\mathbf{MCS\; Target\; Objective\; Value}}$ is close to zero.
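The interplay between the relative error and the safeguard can be sketched as below. The exact test used by e05kbc is described in Section 7; this is only the natural form, given here as an assumption for illustration.

```python
# Hedged sketch of target-based termination: stop when F(x) is within
# the requested relative error of the target, with the safeguard
# taking over when the target is close to zero.  This is an ASSUMED
# form of the test, not the one documented in Section 7.

def target_reached(f, target, objerr, safeguard):
    """objerr ~ MCS Target Objective Error,
    safeguard ~ MCS Target Objective Safeguard."""
    return f - target <= max(objerr * abs(target), safeguard)
```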
This parameter may be set if you wish e05kbc to use a specific value as the target function value to reach during the optimization. Setting ${\mathbf{MCS\; Target\; Objective\; Value}}$ overrides the default termination criterion determined by the optional parameter ${\mathbf{MCS\; Static\; Limit}}$.
This defines the ‘infinite’ bound $\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{bigbnd}$ will be regarded as $+\infty $ (and similarly any lower bound less than or equal to $-\mathit{bigbnd}$ will be regarded as $-\infty $). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
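The $\mathit{bigbnd}$ convention can be sketched as follows (illustrative only, not the NAG code; the default magnitude $10^{20}$ shown is an assumption for the example).

```python
# Illustrative sketch of the `bigbnd` convention: any upper bound at
# or beyond bigbnd is treated as +inf, any lower bound at or beyond
# -bigbnd as -inf.  The default 1e20 here is an assumed value.

def effective_bounds(lower, upper, bigbnd=1e20):
    lo = [float('-inf') if l <= -bigbnd else l for l in lower]
    up = [float('inf') if u >= bigbnd else u for u in upper]
    return lo, up
```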
(See Section 3.1.1 in the Introduction to the NAG Library CL Interface for further information on NAG data types.)
If $i\ge 0$, the
Nag_FileID number (as returned from x04acc)
for the secondary (monitoring) output. If ${\mathbf{Monitoring\; File}}=\mathrm{-1}$, no secondary output is provided. The information output to this file ID is controlled by ${\mathbf{Monitoring\; Level}}$.
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with ${\mathbf{Print\; Level}}$.
(See Section 3.1.1 in the Introduction to the NAG Library CL Interface for further information on NAG data types.)
If $i\ge 0$, the
Nag_FileID number (as returned from x04acc, stdout as the default)
for the primary output of the solver. If ${\mathbf{Print\; File}}=\mathrm{-1}$, the primary output is completely turned off independently of other settings. The information output to this unit is controlled by ${\mathbf{Print\; Level}}$.
This parameter defines how detailed information should be printed by the solver to the primary and secondary output.
$\mathit{i}$
Output
$0$
No output from the solver.
$1$
The Header and Summary.
$2$, $3$, $4$, $5$
Additionally, the Iteration log.
Constraint: $0\le {\mathbf{Print\; Level}}\le 5$.
Print Options
$a$
Default $=\mathrm{YES}$
If ${\mathbf{Print\; Options}}=\mathrm{YES}$, a listing of the optional parameters will be printed to the primary output; a listing is always printed to the secondary output.
Constraint: ${\mathbf{Print\; Options}}=\mathrm{YES}$ or $\mathrm{NO}$.
Print Solution
$a$
Default $=\mathrm{NO}$
If ${\mathbf{Print\; Solution}}=\mathrm{YES}$, the solution will be printed to the primary and secondary output.
Constraint: ${\mathbf{Print\; Solution}}=\mathrm{YES}$ or $\mathrm{NO}$.
Stats Time
$a$
Default $=\mathrm{NO}$
This parameter turns on timings of various parts of the algorithm to give a better overview of where most of the time is spent, which may be helpful when comparing different solution approaches. It is possible to choose between CPU and wall clock time. Choice $\mathrm{YES}$ is equivalent to $\mathrm{WALL\; CLOCK}$.
Constraint: ${\mathbf{Stats\; Time}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{CPU}$ or $\mathrm{WALL\; CLOCK}$.
Task
$a$
Default $=\mathrm{MINIMIZE}$
This parameter specifies the required direction of the optimization. If ${\mathbf{Task}}=\mathrm{FEASIBLE\; POINT}$, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance.
Constraint: ${\mathbf{Task}}=\mathrm{MINIMIZE}$, $\mathrm{MAXIMIZE}$ or $\mathrm{FEASIBLE\; POINT}$.