e05usc is designed to find the global minimum of an arbitrary smooth sum of squares function subject to constraints (which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints) by generating a number of different starting points and performing a local search from each using sequential quadratic programming.
The function may be called by the names: e05usc or nag_glopt_nlp_multistart_sqp_lsq.
Before calling e05usc, the optional parameter arrays iopts and opts MUST be initialized for use with e05usc by calling e05zkc with optstr set to ‘Initialize = e05usc’.
Optional parameters may subsequently be specified by calling e05zkc before the call to e05usc.
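The following is a minimal, illustrative sketch of this initialization in C. The array lengths LIOPTS and LOPTS are placeholders rather than the documented minima, and the call is assumed to have the usual e05zkc (nag_glopt_opt_set) signature; check both against the specifications of iopts and opts below and against your installation.

#include <nag.h>

#define LIOPTS 100   /* placeholder: use the documented minimum length of iopts */
#define LOPTS  100   /* placeholder: use the documented minimum length of opts  */

int main(void)
{
  Integer  iopts[LIOPTS];
  double   opts[LOPTS];
  NagError fail;

  INIT_FAIL(fail);
  /* Mandatory initialization of the option arrays before the first call to e05usc */
  e05zkc("Initialize = e05usc", iopts, (Integer) LIOPTS, opts, (Integer) LOPTS, &fail);
  if (fail.code != NE_NOERROR) {
    /* handle the error */
  }

  /* ... set further options if required, then call e05usc ... */
  return 0;
}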
3 Description
The problem is assumed to be stated in the following form:

   minimize (over x in R^n)   F(x) = (1/2) * sum_{i=1}^{m} ( y_i - f_i(x) )^2
   subject to                 l <= ( x, A_L x, c(x) ) <= u                      (1)

where F(x) (the objective function) is a nonlinear function which can be represented as the sum of squares of m subfunctions (y_1 - f_1(x)), ..., (y_m - f_m(x)), the y_i are constant, A_L is an n_L by n constant linear constraint matrix, and c(x) is an n_N element vector of nonlinear constraint functions. (The matrix A_L and the vector c(x) may be empty.) The objective function and the constraint functions are assumed to be smooth, i.e., at least twice-continuously differentiable. (This function will usually solve (1) if any isolated discontinuities are away from the solution.)
e05usc solves a user-specified number of local optimization problems with different starting points. You may specify the starting points via the function start. If a random number generator is used to generate the starting points then the argument repeat1 allows you to specify whether a repeatable set of points is generated or whether different starting points are generated on different calls. The resulting local minima are ordered and the best nb results are returned, in ascending order of the objective function values at the minima. Thus the value returned in the first position will be the best result obtained. If a sufficiently large number of different starting points is used then this is likely to be the global minimum.
4 References
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Hock W and Schittkowski K (1981) Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187 Springer–Verlag
5 Arguments
1: – IntegerInput
On entry: , the number of subfunctions associated with .
Constraint:
.
2: – IntegerInput
On entry: , the number of variables.
Constraint:
.
3: – IntegerInput
On entry: , the number of general linear constraints.
Constraint:
.
4: – IntegerInput
On entry: , the number of nonlinear constraints.
Constraint:
.
5: – const doubleInput
Note: the dimension, dim, of the array a
must be at least
when .
where appears in this document, it refers to the array element
.
On entry: the matrix of general linear constraints in (1). That is, must contain the th coefficient of the th general linear constraint, for and . If then a may be specified as NULL.
6: – IntegerInput
On entry: the stride separating matrix row elements in the array a.
Constraint:
.
7: – const doubleInput
8: – const doubleInput
On entry: bl must contain the lower bounds and bu the upper bounds for all the constraints in the following order. The first elements of each array must contain the bounds on the variables, the next elements the bounds for the general linear constraints (if any) and the next elements the bounds for the general nonlinear constraints (if any). To specify a nonexistent lower bound (i.e., ), set , and to specify a nonexistent upper bound (i.e., ), set ; the default value of is , but this may be changed by the optional parameter . To specify the th constraint as an equality, set , say, where .
Constraints:
, for ;
if , .
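As an informal illustration of this packing order (a sketch only: n, nclin and ncnln are as in the argument list, the numeric bound values are purely illustrative, and bigbnd stands for the current value of the optional parameter Infinite Bound Size):

/* bl and bu each hold n + nclin + ncnln elements, packed as
 * [variable bounds | linear-constraint bounds | nonlinear-constraint bounds]. */
Integer nctotal = n + nclin + ncnln;
double  bigbnd  = 1.0e20;              /* illustrative "infinite" bound value */
Integer i;

for (i = 0; i < nctotal; i++) {
  bl[i] = -bigbnd;                     /* no lower bound */
  bu[i] =  bigbnd;                     /* no upper bound */
}
bl[0] = 0.4;                           /* lower bound on the first variable */
bl[n] = 1.0;                           /* lower bound on the first general linear constraint */
bl[n + nclin] = bu[n + nclin] = 0.0;   /* first nonlinear constraint treated as an equality */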
9: – const doubleInput
On entry: the coefficients of the constant vector of the objective function.
10: – function, supplied by the userExternal Function
confun must calculate the vector of nonlinear constraint functions and (optionally) its Jacobian for a specified n-element vector of variables. If there are no nonlinear constraints, confun will never be called by e05usc. If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
On entry: indicates which values must be assigned during each call of confun. Only the following values need be assigned, for each value of such that :
, the th nonlinear constraint.
All available elements in , for (see cjsl for the definition of CJSL).
and all available elements in , for (see cjsl for the definition of CJSL).
On exit: may be set to a negative value if you wish to abandon the solution to the current local minimization problem. In this case e05usc will move to the next local minimization problem.
2: – IntegerInput
On entry: , the number of nonlinear constraints.
3: – IntegerInput
On entry: , the number of variables.
4: – IntegerInput
On entry: the stride separating matrix row elements in the array cjsl.
5: – const IntegerInput
On entry: the indices of the elements of c and/or cjsl that must be evaluated by confun. If , and/or the available elements of , for (see argument mode) must be evaluated at . See cjsl for the definition of CJSL.
6: – const doubleInput
On entry: , the vector of variables at which the constraint functions and/or the available elements of the constraint Jacobian are to be evaluated.
7: – doubleOutput
On exit: if and or , must contain the value of . The remaining elements of c, corresponding to the non-positive elements of needc, need not be set.
where appears in this document, it refers to the array element
.
CJSL may be regarded as a two-dimensional ‘slice’ in column order of the three-dimensional matrix CJAC stored in the array cjac of e05usc.
On entry: unless or , the elements of cjsl are set to special values which enable e05usc to detect whether they are changed by confun.
On exit: if and or , , for , must contain the available elements of the vector given by
where is the partial derivative of the th constraint with respect to the th variable, evaluated at the point . See also the argument nstate. The remaining , for , corresponding to non-positive elements of needc, need not be set.
If all elements of the constraint Jacobian are known (i.e., or ), any constant elements may be assigned to cjsl one time only at the start of each local optimization. An element of cjsl that is not subsequently assigned in confun will retain its initial value throughout the local optimization. Constant elements may be loaded into cjsl during the first call to confun for the local optimization (signalled by the value ). The ability to preload constants is useful when many Jacobian elements are identically zero, in which case cjsl may be initialized to zero and nonzero elements may be reset by confun.
Note that constant nonzero elements do affect the values of the constraints. Thus, if is set to a constant value, it need not be reset in subsequent calls to confun, but the value must nonetheless be added to . For example, if and then the term must be included in the definition of .
It must be emphasized that, if or , unassigned elements of cjsl are not treated as constant; they are estimated by finite differences, at nontrivial expense. If you do not supply a value for the optional parameter , an interval for each element of is computed automatically at the start of each local optimization. The automatic procedure can usually identify constant elements of cjsl, which are then computed once only by finite differences.
9: – IntegerInput
On entry: if then e05usc is calling confun for the first time on the current local optimization problem. This argument setting allows you to save computation time if certain data must be read or calculated only once.
10: – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to confun.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e05usc you may allocate memory and initialize these pointers with various quantities for use by confun when called from e05usc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: confun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e05usc. If your code inadvertently does return any NaNs or infinities, e05usc is likely to produce unexpected results.
confun should be tested separately before being used in conjunction with e05usc. See also the description of the optional parameter Verify Level.
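By way of illustration, a skeleton confun for a single nonlinear constraint c_1(x) = x_1 x_2 might look as follows. The argument order follows the description above, but the mode values 0, 1 and 2 (function only, Jacobian only, both) and the CJSL(i,j) element mapping are assumptions based on related NAG SQP routines and on the column-order description of cjsl; verify them against this document and nag.h before use.

#include <nag.h>

/* Assumed column-order mapping of the constraint Jacobian slice. */
#define CJSL(I, J) cjsl[((J) - 1) * pdcjsl + (I) - 1]

static void NAG_CALL confun(Integer *mode, Integer ncnln, Integer n,
                            Integer pdcjsl, const Integer needc[],
                            const double x[], double c[], double cjsl[],
                            Integer nstate, Nag_Comm *comm)
{
  if (needc[0] > 0) {
    if (*mode == 0 || *mode == 2)
      c[0] = x[0] * x[1];        /* c_1(x) = x_1 * x_2 */
    if (*mode == 1 || *mode == 2) {
      CJSL(1, 1) = x[1];         /* d c_1 / d x_1 */
      CJSL(1, 2) = x[0];         /* d c_1 / d x_2 */
    }
  }
}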
11: – function, supplied by the userExternal Function
objfun must calculate either the th element of the vector or all elements of and (optionally) its Jacobian () for a specified -element vector .
On exit: may be set to a negative value if you wish to abandon the solution to the current local minimization problem. In this case e05usc will move to the next local minimization problem.
2: – IntegerInput
On entry: , the number of subfunctions.
3: – IntegerInput
On entry: , the number of variables.
4: – IntegerInput
On entry: the stride separating matrix row elements in the array fjsl.
5: – IntegerInput
On entry: if , only the th element of needs to be evaluated at ; the remaining elements need not be set. This can result in significant computational savings when .
6: – const doubleInput
On entry: , the vector of variables at which the objective function and/or all available elements of its gradient are to be evaluated.
FJSL may be regarded as a two-dimensional ‘slice’ in column order of the three-dimensional matrix FJAC stored in the array fjac of e05usc.
On entry: is set to a special value.
On exit: if or and , the th row of fjsl must contain the available elements of the vector given by
evaluated at the point . See also the argument nstate.
9: – IntegerInput
On entry: if then e05usc is calling objfun for the first time on the current local optimization problem. This argument setting allows you to save computation time if certain data must be read or calculated only once.
10: – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to objfun.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e05usc you may allocate memory and initialize these pointers with various quantities for use by objfun when called from e05usc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
Note: objfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e05usc. If your code inadvertently does return any NaNs or infinities, e05usc is likely to produce unexpected results.
objfun should be tested separately before being used in conjunction with e05usc. See also the description of the optional parameter Verify Level.
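A skeleton objfun, using the model function of the example in Section 10 purely for illustration, might look like the sketch below. The mode values 0, 1 and 2, the FJSL(i,j) mapping, the handling of needfi and the hypothetical use of comm->p to pass the data abscissae t[] are all assumptions to be checked against this document and nag.h.

#include <nag.h>
#include <math.h>

/* Assumed column-order mapping of the subfunction Jacobian slice. */
#define FJSL(I, J) fjsl[((J) - 1) * pdfjsl + (I) - 1]

static void NAG_CALL objfun(Integer *mode, Integer m, Integer n,
                            Integer pdfjsl, Integer needfi, const double x[],
                            double f[], double fjsl[], Integer nstate,
                            Nag_Comm *comm)
{
  /* Hypothetical arrangement: the data abscissae t[0..m-1] are supplied via comm->p. */
  const double *t = (const double *) comm->p;
  Integer i;

  for (i = 1; i <= m; i++) {
    double e = exp(-x[1] * (t[i - 1] - 8.0));
    if (needfi > 0 && needfi != i)
      continue;                                /* only subfunction needfi is required */
    if (*mode == 0 || *mode == 2)
      f[i - 1] = x[0] + (0.49 - x[0]) * e;     /* f_i(x) */
    if (*mode == 1 || *mode == 2) {
      FJSL(i, 1) = 1.0 - e;                                  /* d f_i / d x_1 */
      FJSL(i, 2) = -(0.49 - x[0]) * (t[i - 1] - 8.0) * e;    /* d f_i / d x_2 */
    }
  }
}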
12: – IntegerInput
On entry: the number of different starting points to be generated and used. The more points used, the more likely that the best returned solution will be a global minimum.
Constraint:
.
13: – doubleOutput
Note: the dimension, dim, of the array x
must be at least
.
where appears in this document, it refers to the array element
.
On exit: contains the final estimate of the th solution, for .
14: – IntegerInput
On entry: the first dimension of X as stored in the array x.
Constraint:
.
15: – function, supplied by the userExternal Function
start must calculate the npts starting points to be used by the local optimizer. If you do not wish to write a function specific to your problem then you can specify the NAG-defined null void function pointer, NULLFN, in the call. In this case, a default function uses the NAG quasi-random number generators to distribute starting points uniformly across the domain. It is affected by the value of repeat1.
On entry: indicates the number of starting points.
2: – doubleInput/Output
Note: where appears in this document, it refers to the array element
.
On entry: all elements of quas will have been set to zero, so only nonzero values need be set subsequently.
On exit: must contain the starting points for the npts local minimizations, i.e., must contain the th component of the th starting point.
3: – IntegerInput
On entry: the number of variables.
4: – Nag_BooleanInput
On entry: specifies whether a repeatable or non-repeatable sequence of points are to be generated.
5: – const doubleInput
On entry: the lower bounds on the variables. These may be used to ensure that the starting points generated in some sense ‘cover’ the region, but there is no requirement that a starting point be feasible.
6: – const doubleInput
On entry: the upper bounds on the variables. (See bl.)
7: – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to start.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void *. Before calling e05usc you may allocate memory and initialize these pointers with various quantities for use by start when called from e05usc (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
On exit: if you set mode to a negative value then e05usc will terminate immediately with NE_USER_STOP. Provided fail is not NAGERR_DEFAULT on entry to e05usc, fail will contain this value of mode.
Note: start should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e05usc. If your code inadvertently does return any NaNs or infinities, e05usc is likely to produce unexpected results.
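A sketch of a user-supplied start function that simply spaces the npts points evenly between the bounds on the variables is shown below. The prototype (in particular the final Integer *mode argument), the QUAS(j,i) element mapping and the assumption that bl and bu are finite are all assumptions to be checked against the description above; a practical implementation would also cope with infinite bounds.

#include <nag.h>

/* Assumed layout: component j of starting point i. */
#define QUAS(J, I) quas[((I) - 1) * n + (J) - 1]

static void NAG_CALL mystart(Integer npts, double quas[], Integer n,
                             Nag_Boolean repeat1, const double bl[],
                             const double bu[], Nag_Comm *comm, Integer *mode)
{
  Integer i, j;
  /* Space the points evenly between the (assumed finite) bounds on the variables. */
  for (i = 1; i <= npts; i++)
    for (j = 1; j <= n; j++)
      QUAS(j, i) = bl[j - 1] + (bu[j - 1] - bl[j - 1]) *
                   (double) (i - 1) / (double) (npts > 1 ? npts - 1 : 1);
  /* Leave *mode non-negative to continue; set it negative to make e05usc stop. */
}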
16: – Nag_BooleanInput
On entry: is passed as an argument to start and may be used to initialize a random number generator to a repeatable, or non-repeatable, sequence. See Section 9 for more detail.
17: – IntegerInput
On entry: the number of solutions to be returned. The function saves up to nb local minima ordered by increasing value of the final objective function. If the defining criterion for ‘best solution’ is only that the value of the objective function is as small as possible then nb should be set to . However, if you want to look at other solutions that may have desirable properties then setting will produce nb local minima, ordered by increasing value of their objective functions at the minima.
Constraint:
.
18: – doubleOutput
On exit: contains the value of the objective function at the final iterate for the th solution.
19: – doubleOutput
Note: the dimension, dim, of the array f
must be at least
.
where appears in this document, it refers to the array element
.
On exit: contains the value of the th function at the final iterate, for ,
for the th solution, for .
20: – doubleOutput
Note: the dimension, dim, of the array fjac
must be at least
.
where appears in this document, it refers to the array element .
On exit: for the th returned solution, the Jacobian matrix of the functions at the final iterate, i.e.,
contains the partial derivative of the th function with respect to the th variable, for , and . (See also the discussion of argument fjsl under objfun.)
21: – IntegerInput
On entry:
the first dimension of the
matrix FJAC as stored in the array fjac.
Constraint:
.
22: – IntegerInput
On entry: the second dimension of the matrix FJAC as stored in the array fjac.
Constraint:
.
23: – IntegerOutput
On exit: contains the number of major iterations performed to obtain the th solution. If less than nb solutions are returned then contains the number of starting points that have resulted in a converged solution. If this is close to npts then this might be indicative that fewer than nb local minima exist.
24: – doubleOutput
Note: the dimension, dim, of the array c
must be at least
.
where appears in this document, it refers to the array element
.
On exit: if ,
contains the value of the th nonlinear constraint function at the final iterate, for the th solution, for .
If , c is not referenced and may be specified as NULL.
25: – IntegerInput
On entry: the first dimension of C as stored in the array c.
Constraint:
.
26: – doubleOutput
Note: the dimension, dim, of the array cjac
must be at least
.
where appears in this document, it refers to the array element .
On exit: if , cjac contains the Jacobian matrices of the nonlinear constraint functions at the final iterate for each of the returned solutions, i.e.,
contains the partial derivative of the th constraint function with respect to the th variable, for and , for the th solution. (See the discussion of argument cjsl under confun.)
If , cjac is not referenced and may be specified as NULL.
27: – IntegerInput
On entry:
the first dimension of the
matrix CJAC as stored in the array cjac.
Constraint:
.
28: – IntegerInput
On entry: the second dimension of the matrix CJAC as stored in the array cjac.
Constraint:
if , .
29: – doubleOutput
Note: the dimension, dim, of the array clamda
must be at least
.
where appears in this document, it refers to the array element
.
On exit: the values of the QP multipliers from the last QP subproblem solved for the th solution. should be non-negative if and non-positive if .
30: – IntegerInput
On entry: the stride separating matrix row elements in the array clamda.
Constraint:
.
31: – IntegerOutput
Note: the dimension, dim, of the array istate
must be at least
.
where appears in this document, it refers to the array element
.
On exit: contains the status of the constraints in the QP working set for the th solution. The significance of each possible value of is as follows:
Meaning
The constraint is satisfied to within the feasibility tolerance, but is not in the QP working set.
This inequality constraint is included in the QP working set at its lower bound.
This inequality constraint is included in the QP working set at its upper bound.
This constraint is included in the QP working set as an equality. This value of istate can occur only when .
32: – IntegerInput
On entry: the stride separating matrix row elements in the array istate.
Constraint:
.
33: – IntegerCommunication Array
34: – doubleCommunication Array
The arrays iopts and opts MUST NOT be altered between calls to any of the functions e05usc and e05zkc.
35: – Nag_Comm *
The NAG communication argument (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
36: – IntegerOutput
On exit: if NE_NOERROR, contains one of , or .
The final iterate satisfies the first-order Kuhn–Tucker conditions (see Section 11 in e04wdc) to the accuracy requested, but the sequence of iterates has not yet converged. The local optimizer was terminated because no further improvement could be made in the merit function (see Section 9.2).
does not satisfy the first-order Kuhn–Tucker conditions (see Section 11) and no improved point for the merit function (see Section 9.2) could be found during the final linesearch.
This sometimes occurs because an overly stringent accuracy has been requested, i.e., the value of the optional parameter Optimality Tolerance is too small.
As usual denotes success.
If NW_SOME_SOLUTIONS on exit, then not all nb solutions have been found, and contains the number of solutions actually found.
37: – NagError *Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).
6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.
NE_BAD_PARAM
On entry, argument had an illegal value.
NE_BOUND
On entry, : .
Constraint: , for all .
NE_DERIV_ERRORS
The user-supplied derivatives of the objective function and/or nonlinear constraints appear to be incorrect.
Large errors were found in the derivatives of the objective function and/or nonlinear constraints. This value of fail.code will occur if the verification process indicated that at least one gradient or Jacobian element had no correct figures. You should refer to or enable the printed output to determine which elements are suspected to be in error.
As a first-step, you should check that the code for the objective and constraint values is correct – for example, by computing the function at a point where the correct value is known. However, care should be taken that the chosen point fully tests the evaluation of the function. It is remarkable how often the values or are used to test function evaluation procedures, and how often the special properties of these numbers make the test meaningless.
Gradient checking will be ineffective if the objective function uses information computed by the constraints, since they are not necessarily computed before each function evaluation.
Errors in programming the function may be quite subtle in that the function value is ‘almost’ correct. For example, the function may not be accurate to full precision because of the inaccurate calculation of a subsidiary quantity, or the limited accuracy of data upon which the function depends. A common error on machines where numerical calculations are usually performed in double precision is to include even one single precision constant in the calculation of the function; since some compilers do not convert such constants to double precision, half the correct figures may be lost by such a seemingly trivial error.
NE_INITIALIZATION
Failed to initialize optional parameter arrays.
NE_INT
On entry, .
Constraint: .
On entry, .
Constraint: .
On entry, .
Constraint: .
On entry, .
Constraint: .
NE_INT_2
On entry, and .
Constraint: .
On entry, and .
Constraint: .
On entry, and .
Constraint: .
On entry, and .
Constraint: .
On entry, and .
Constraint: .
On entry, and .
Constraint: .
On entry, and .
Constraint: .
NE_INT_3
On entry, , and .
Constraint: if , .
NE_INT_4
On entry, , , and .
Constraint: .
On entry, , , and .
Constraint: .
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.
NE_LIN_NOT_FEASIBLE
e05usc has terminated without finding any solutions. The majority of calls to the local optimizer have failed to find a feasible point for the linear constraints and bounds, which means that either no feasible point exists for the given value of the optional parameter (default value , where is the machine precision), or no feasible point could be found in the number of iterations specified by the optional parameter . You should check that there are no constraint redundancies. If the data for the constraints are accurate only to an absolute precision , you should ensure that the value of the optional parameter is greater than . For example, if all elements of are of order unity and are accurate to only three decimal places, should be at least .
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library CL Interface for further information.
NE_NO_SOLUTION
e05usc has failed to find any solutions. The majority of local optimizations have failed because the limiting number of iterations have been reached.
NE_NONLIN_NOT_FEASIBLE
e05usc has failed to find any solutions. The majority of local optimizations could not find a feasible point for the nonlinear constraints. The problem may have no feasible solution. This behaviour will occur if there is no feasible point for the nonlinear constraints. (However, there is no general test that can determine whether a feasible point exists for a set of nonlinear constraints.)
NE_USER_STOP
User terminated computation from start procedure: .
NW_SOME_SOLUTIONS
Only solutions obtained.
Not all nb solutions have been found. contains the number actually found.
7 Accuracy
If NE_NOERROR on exit and the value of , then the vector returned in the array x for solution is an estimate of the solution to an accuracy of approximately
.
8 Parallelism and Performance
e05usc is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library. In these implementations, this function may make calls to the user-supplied functions from within an OpenMP parallel region. Thus OpenMP pragmas within the user functions can only be used if you are compiling the user-supplied function and linking the executable in accordance with the instructions in the Users' Note for your implementation. You must also ensure that you use the NAG communication argument comm in a thread safe manner, which is best achieved by only using it to supply read-only data to the user functions.
e05usc makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9 Further Comments
You should be wary of requesting much intermediate output from the local optimizer, since large volumes may be produced if npts is large.
In computing the default set of starting points, e05usc makes use of the NAG quasi-random Sobol generators (g05ylc and g05ymc). If NULLFN is used as the actual argument for start and repeat1 is Nag_FALSE, then a randomly chosen value for iskip is used; otherwise iskip is set to a fixed value. If repeat1 is set to Nag_FALSE and the program is executed several times, each time producing the same best answer, then there is increased probability that this answer is a global minimum. However, if it is important that identical results be obtained on successive runs, then repeat1 should be set to Nag_TRUE.
9.1 Description of the Printed Output
This section describes the intermediate printout and final printout that may be produced by e05usc. The intermediate printout is a subset of the monitoring information produced by the function at every iteration (see Section 13). You can control the level of printed output (see the description of the optional parameters and ). Note that the intermediate printout and final printout are produced only if or .
The following line of summary output ( characters) is produced at every major iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Maj
is the major iteration count.
Mnr
is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see Section 11). Note that Mnr may be greater than the optional parameter if some iterations are required for the feasibility phase.
Step
is the step taken along the computed search direction. On reasonably well-behaved local problems, the unit step (i.e., ) will be taken as the solution is approached.
Merit Function
is the value of the augmented Lagrangian merit function
(12) in e04ufc
at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters
(see Section 11 in e04wdc).
As the solution is approached, Merit Function will converge to the value of the objective function at the solution.
If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or
the local optimizer terminates. Repeated failures will prevent a feasible point being found for the nonlinear constraints.
If there are no nonlinear constraints present (i.e., ) then this entry contains Objective, the value of the objective function . The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints.
Norm Gz
is , the Euclidean norm of the projected gradient
(see Section 11 in e04wdc).
Norm Gz will be approximately zero in the neighbourhood of a solution.
Violtn
is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if ncnln is zero). Violtn will be approximately zero in the neighbourhood of a solution.
Cond Hz
is a lower bound on the condition number of the projected Hessian approximation
(; see
(6) and (11) in e04ufc). The larger this number, the more difficult the local problem.
M
is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite
(see Section 11 in e04wdc).
I
is printed if the QP subproblem has no feasible point.
C
is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is resolved with the central difference gradient and Jacobian.) If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that is close to a Kuhn–Tucker point
(see Section 11 in e04wdc).
L
is printed if the linesearch has produced a relative change in greater than the value defined by the optional parameter . If this output occurs frequently during later iterations of the run, optional parameter should be set to a larger value.
R
is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, is modified so that its diagonal condition estimator is bounded.
The following line of summary output ( characters) is produced at every minor iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Itn
is the iteration count.
Step
is the step taken along the computed search direction. If a constraint is added during the current iteration (i.e., Jadd is positive), Step will be the step to the nearest constraint. During the optimality phase, the step can be greater than only if the factor is singular.
(See Section 11.)
Ninf
is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.
Sinf/Objective
is the value of the current objective function. If is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If is feasible, Objective is the value of the objective function of the QP subproblem. The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point.
During the optimality phase the value of the objective function will be nonincreasing. During the feasibility phase the number of constraint infeasibilities will not increase until either a feasible point is found or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.
Norm Gz
is , the Euclidean norm of the reduced gradient with respect to . During the optimality phase, this norm will be approximately zero after a unit step.
(See Section 11.)
The final printout includes a listing of the status of every variable and constraint. The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero.
Varbl
gives the name (V) and index , for , of the variable.
State
gives the state of the variable (FR if neither bound is in the working set, EQ if a fixed variable, LL if on its lower bound, UL if on its upper bound, TF if temporarily fixed at its current value). If Value lies outside the upper or lower bounds by more than the , State will be ++ or -- respectively.
(The latter situation can occur only when there is no feasible point for the bounds and linear constraints.)
A key is sometimes printed before State.
A
Alternative optimum possible. The variable is active at one of its bounds, but its Lagrange multiplier is essentially zero. This means that if the variable were allowed to start moving away from its bound then there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange multipliers might also change.
D
Degenerate. The variable is free, but it is equal to (or very close to) one of its bounds.
I
Infeasible. The variable is currently violating one of its bounds by more than the .
Value
is the value of the variable at the final iteration.
Lower Bound
is the lower bound specified for the variable. None indicates that .
Upper Bound
is the upper bound specified for the variable. None indicates that .
Lagr Mult
is the Lagrange multiplier for the associated bound. This will be zero if State is FR unless and , in which case the entry will be blank. If is optimal, the multiplier should be non-negative if State is LL and non-positive if State is UL.
Slack
is the difference between the variable Value and the nearer of its (finite) bounds and . A blank entry indicates that the associated variable is not bounded (i.e., and ).
The meaning of the printout for linear and nonlinear constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, and are replaced by and respectively, and with the following changes in the heading:
L Con
gives the name (L) and index , for , of the linear constraint.
N Con
gives the name (N) and index (), for , of the nonlinear constraint.
Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Slack column to become positive.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.
10 Example
This example is based on Problem 57 in Hock and Schittkowski (1981) and involves the minimization of the sum of squares function
where
and
subject to the bounds
to the general linear constraint
and to the nonlinear constraint
The optimal solution (to five figures) is
and . The nonlinear constraint is active at the solution.
11 Algorithmic Details
e05usc implements a Sequential Quadratic Programming (SQP) method incorporating an augmented Lagrangian merit function and a BFGS (Broyden–Fletcher–Goldfarb–Shanno) quasi-Newton approximation to the Hessian of the Lagrangian, and is based on e04wdc. The documents for e04ufc and e04wdc should be consulted for details of the method.
12 Optional Parameters
Several optional parameters in e05usc define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e05usc these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters. The following is a list of the optional parameters available and a full description of each optional parameter is provided in Section 12.1.
Optional parameters may be specified by calling e05zkc before a call to e05usc. Before calling e05usc, the optional parameter arrays iopts and opts MUST be initialized for use with e05usc by calling e05zkc with optstr set to ‘Initialize = e05usc’.
All optional parameters not specified are set to their default values. Optional parameters specified are unaltered by e05usc (unless they define invalid values) and so remain in effect for subsequent calls to e05usc.
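For example, assuming iopts and opts have already been initialized as in the sketch in Section 1 above (with the placeholder lengths LIOPTS and LOPTS), individual options can subsequently be reset by calls such as the following; the option names are taken from the list in Section 12.1 and the exact string syntax should be checked against e05zkc.

NagError fail;
INIT_FAIL(fail);
/* Reset individual optional parameters after the mandatory "Initialize = e05usc" call */
e05zkc("Major Iteration Limit = 50",     iopts, (Integer) LIOPTS, opts, (Integer) LOPTS, &fail);
e05zkc("Feasibility Tolerance = 1.0e-6", iopts, (Integer) LIOPTS, opts, (Integer) LOPTS, &fail);
e05zkc("Nolist",                         iopts, (Integer) LIOPTS, opts, (Integer) LOPTS, &fail);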
12.1 Description of the Optional Parameters
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
the keywords, where the minimum abbreviation of each keyword is underlined (if no characters of an optional qualifier are underlined, the qualifier may be omitted)
a parameter value,
where the letters , and denote options that take character, integer and real values respectively
the default value, where the symbol is a generic notation for machine precision (see X02AJC), and denotes the relative precision of the objective function , and signifies the value of
Keywords and character values are case-insensitive; however, they must be separated by at least one space.
Optional parameters used to specify files have type Nag_FileID (see Section 3.1.1 in the Introduction to the NAG Library CL Interface). This ID value must either be set to (the default value) in which case there will be no output, or will be as returned by a call of x04acc.
For e05usc the maximum length of the argument cvalue used by e05zlc is .
Central Difference Interval
Default values are computed
If the algorithm switches to central differences because the forward-difference approximation is not sufficiently accurate, the value of is used as the difference interval for every element of . The switch to central differences is indicated by C at the end of each line of intermediate printout produced by the major iterations (see Section 9.2). The use of finite differences is discussed further under the optional parameter .
If you supply a value for this optional parameter, a small value between and is appropriate.
Crash Tolerance
Default
This value is used when the local minimizer selects an initial working set. If , the initial working set will include (if possible) bounds or general inequality constraints that lie within of their bounds. In particular, a constraint of the form will be included in the initial working set if . If or , the default value is used.
Defaults
This special keyword is used to reset all optional parameters to their default values, and any random state stored in state will be destroyed.
Any option value given with this keyword will be ignored. This optional parameter cannot be queried or got.
Derivative Level
Default
This parameter indicates which derivatives are provided in user-supplied functions objfun and confun. The possible choices for are the following.
Meaning
3
All elements of the objective gradient and the constraint Jacobian are provided.
2
All elements of the constraint Jacobian are provided, but some elements of the objective gradient are not specified.
1
All elements of the objective gradient are provided, but some elements of the constraint Jacobian are not specified.
0
Some elements of both the objective gradient and the constraint Jacobian are not specified.
The value should be used whenever possible, since e05usc is more reliable (and will usually be more efficient) when all derivatives are exact.
If or , e05usc will estimate the unspecified elements of the objective gradient, using finite differences. The computation of finite difference approximations usually increases the total run-time, since a call to objfun is required for each unspecified element. Furthermore, less accuracy can be attained in the solution (see Chapter 8 of Gill et al. (1981), for a discussion of limiting accuracy).
If or , e05usc will approximate unspecified elements of the constraint Jacobian. One call to confun is needed for each variable for which partial derivatives are not available. For example, if the Jacobian has the form
where ‘’ indicates an element provided by you and ‘?’ indicates an unspecified element,
the local minimizer
will call confun twice: once to estimate the missing element in column 2, and again to estimate the two missing elements in column . (Since columns and are known, they require no calls to confun.)
At times, central differences are used rather than forward differences, in which case twice as many calls to objfun and confun are needed. (The switch to central differences is not under your control.)
If or , the default value is used.
Difference Interval
Default values are computed
This option defines an interval used to estimate derivatives by finite differences in the following circumstances:
(a)For verifying the objective and/or constraint gradients (see the description of the optional parameter ).
(b)For estimating unspecified elements of the objective gradient or the constraint Jacobian.
In general, a derivative with respect to the th variable is approximated using the interval , where , with the first point feasible with respect to the bounds and linear constraints. If the functions are well scaled, the resulting derivative approximation should be accurate to . See Gill et al. (1981) for a discussion of the accuracy in finite difference approximations.
If a difference interval is not specified, a finite difference interval will be computed automatically for each variable by a procedure that requires up to six calls of confun and objfun for each element. This option is recommended if the function is badly scaled or you wish to have
the local minimizer
determine constant elements in the objective and constraint gradients (see the descriptions of confun and objfun in Section 5).
If you supply a value for this optional parameter, a small value between and is appropriate.
Feasibility Tolerance
Default
The scalar defines the maximum acceptable absolute violations in linear and nonlinear constraints at a ‘feasible’ point; i.e., a constraint is considered satisfied if its violation does not exceed . If or , the default value is used. Using this keyword sets both optional parameters and to , if . (Additional details are given under the descriptions of these optional parameters.)
Function Precision
Default
This parameter defines , which is intended to be a measure of the accuracy with which the problem functions and can be computed. If or , the default value is used.
The value of should reflect the relative precision of ; i.e., acts as a relative precision when is large, and as an absolute precision when is small. For example, if is typically of order and the first six significant digits are known to be correct, an appropriate value for would be . In contrast, if is typically of order and the first six significant digits are known to be correct, an appropriate value for would be . The choice of can be quite complicated for badly scaled problems; see Chapter 8 of Gill et al. (1981) for a discussion of scaling techniques. The default value is appropriate for most simple functions that are computed with full accuracy. However, when the accuracy of the computed function values is known to be significantly worse than full precision, the value of should be large enough so that e05usc will not attempt to distinguish between function values that differ by less than the error inherent in the calculation.
Infinite Bound Size
Default
This defines the ‘infinite’ bound in the definition of the problem constraints. Any upper bound greater than or equal to will be regarded as (and similarly any lower bound less than or equal to will be regarded as ).
Constraint:
.
Infinite Step Size
Default
If , specifies the magnitude of the change in variables that is treated as a step to an unbounded solution. If the change in during an iteration would exceed the value of , the objective function is considered to be unbounded below in the feasible region. If , the default value is used.
Line Search Tolerance
Default
The value () controls the accuracy with which the step taken during each iteration approximates a minimum of the merit function along the search direction (the smaller the value of , the more accurate the linesearch). The default value requests an inaccurate search, and is appropriate for most problems, particularly those with any nonlinear constraints.
If there are no nonlinear constraints, a more accurate search may be appropriate when it is desirable to reduce the number of major iterations – for example, if the objective function is cheap to evaluate, or if a substantial number of derivatives are unspecified. If or , the default value is used.
Linear Feasibility Tolerance
Default
Nonlinear Feasibility Tolerance
Default or
The default value of is if or , and otherwise.
The scalars and define the maximum acceptable absolute violations in linear and nonlinear constraints at a ‘feasible’ point; i.e., a linear constraint is considered satisfied if its violation does not exceed , and similarly for a nonlinear constraint and . If or , the default value is used, for .
On entry to
the local optimizer
an iterative procedure is executed in order to find a point that satisfies the linear constraints and bounds on the variables to within the tolerance . All subsequent iterates will satisfy the linear constraints to within the same tolerance (unless is comparable to the finite difference interval).
For nonlinear constraints, the feasibility tolerance defines the largest constraint violation that is acceptable at an optimal point. Since nonlinear constraints are generally not satisfied until the final iterate, the value of optional parameter acts as a partial termination criterion for the iterative sequence generated by
the local minimizer
(see the discussion of optional parameter ).
These tolerances should reflect the precision of the corresponding constraints. For example, if the variables and the coefficients in the linear constraints are of order unity, and the latter are correct to about decimal digits, it would be appropriate to specify as .
List
Nolist
Default
Optional parameter enables printing of each optional parameter specification as it is supplied. suppresses this printing.
Major Iteration Limit
Default
Iteration Limit
Iters
Itns
The value of specifies the maximum number of major iterations allowed before termination of each local subproblem. Setting and means that the workspace needed by each local minimization will be computed and printed, but no iterations will be performed. If , the default value is used.
Major Print Level
Default
Print Level
The value of controls the amount of printout produced by the major iterations of e05usc, as indicated below. A detailed description of the printed output is given in Section 9.2 (summary output at each major iteration and the final solution) and Section 13 (monitoring information at each major iteration). (See also the description of the optional parameter .)
The following printout is sent
to stdout:
Output
No output.
For the other values described below, the arguments used by
the local minimizer
are displayed in addition to intermediate and final output.
Output
The final solution only.
One line of summary output ( characters; see Section 9.2) for each major iteration (no printout of the final solution).
The final solution and one line of summary output for each major iteration.
The following printout is sent to the
file associated with the FileID
defined by the optional parameter :
Output
No output.
One long line of output ( characters; see Section 13) for each major iteration (no printout of the final solution).
At each major iteration, the objective function, the Euclidean norm of the nonlinear constraint violations, the values of the nonlinear constraints (the vector ), the values of the linear constraints (the vector ), and the current values of the variables (the vector ).
At each major iteration, the diagonal elements of the matrix associated with the factorization
(5) in e04ufc
(see Section 11 in e04wdc)
of the QP working set, and the diagonal elements of , the triangular factor of the transformed and reordered Hessian
(6) in e04ufc
(see Section 11 in e04wdc).
Minor Iteration Limit
Default
The value of specifies the maximum number of iterations for finding a feasible point with respect to the bounds and linear constraints (if any). The value of also specifies the maximum number of minor iterations for the optimality phase of each QP subproblem. If , the default value is used.
Minor Print Level
Default
The value of controls the amount of printout produced by the minor iterations of e05usc (i.e., the iterations of the quadratic programming algorithm), as indicated below. A detailed description of the printed output is given in Section 9.2 (summary output at each minor iteration and the final QP solution) and Section 13 (monitoring information at each minor iteration). (See also the description of the optional parameter .)
The following printout is sent to stdout:
Output
No output.
The final QP solution only.
One line of summary output ( characters; see Section 9.2) for each minor iteration (no printout of the final QP solution).
The final QP solution and one line of summary output for each minor iteration.
The following printout is sent to the
file associated with the FileID
defined by the optional parameter :
Output
No output.
One long line of output ( characters; see Section 9.2) for each minor iteration (no printout of the final QP solution).
At each minor iteration, the current estimates of the QP multipliers, the current estimate of the QP search direction, the QP constraint values, and the status of each QP constraint.
At each minor iteration, the diagonal elements of the matrix associated with the factorization
(5) in e04ufc
(see Section 11 in e04wdc)
of the QP working set, and the diagonal elements of the Cholesky factor of the transformed Hessian
(6) in e04ufc
(see Section 11 in e04wdc).
Monitoring File
Default
(See Section 3.1.1 in the Introduction to the NAG Library CL Interface for further information on NAG data types.)
is of the type Nag_FileID and is obtained by a call to
x04acc.
If and or and , monitoring information produced by e05usc at every iteration is sent to a file
with ID .
If and/or and , no monitoring information is produced.
Optimality Tolerance
Default
The argument () specifies the accuracy to which you wish the final iterate to approximate a solution of
each local
problem. Broadly speaking, indicates the number of correct figures desired in the objective function at the solution. For example, if is and
a local minimization
terminates successfully, the final value of should have approximately six correct figures. If or , the default value is used.
The local optimizer
will terminate successfully if the iterative sequence of values is judged to have converged and the final point satisfies the first-order Kuhn–Tucker conditions
(see Section 11 in e04wdc)
The sequence of iterates is considered to have converged at if
(2)
where is the search direction and the step length from
(3) in e04ufc.
An iterate is considered to satisfy the first-order conditions for a minimum if
(3)
and
(4)
where is the projected gradient
(see Section 11 in e04wdc),
is the gradient of with respect to the free variables, is the violation of the th active nonlinear constraint, and is the .
Out_Level
Default
This option defines the amount of extra information to be sent to
a file associated with
. The possible choices for are the following:
Meaning
0
No extra output.
1
Updated solutions only. This is useful during long runs to observe progress.
2
Successful start points only. This is useful to save the starting points that gave rise to the final solution.
3
Both updated solutions and successful start points.
Punch Unit
Default
This option allows you to send information arising from an appropriate setting of to be sent to
a file with an integer identifier . must be obtained by a call to x04acc where is the third argument to x04acc.
Start Objective Check At Variable
Default
Stop Objective Check At Variable
Default
Start Constraint Check At Variable
Default
Stop Constraint Check At Variable
Default
These keywords take effect only if . They may be used to control the verification of gradient elements computed by objfun and/or Jacobian elements computed by confun. For example, if the first elements of the objective gradient appeared to be correct in an earlier run, so that only element remains questionable, it is reasonable to specify . If the first variables appear linearly in the objective, so that the corresponding gradient elements are constant, the above choice would also be appropriate.
If or , the default value is used, for . If or , the default value is used, for .
Step Limit
Default
If specifies the maximum change in variables at the first step of the linesearch. In some cases, such as or , even a moderate change in the elements of can lead to floating-point overflow. The parameter is, therefore, used to encourage evaluation of the problem functions at meaningful points. Given any major iterate , the first point at which and are evaluated during the linesearch is restricted so that
The linesearch may go on and evaluate and at points further from if this will result in a lower value of the merit function (indicated by L at the end of each line of output produced by the major iterations; see
Section 9.2).
If L is printed for most of the iterations, should be set to a larger value.
Wherever possible, upper and lower bounds on should be used to prevent evaluation of nonlinear functions at wild values. The default value should not affect progress on well-behaved functions, but values such as or may be helpful when rapidly varying functions are present. If a small value of is selected, a good starting point may be required. An important application is to the class of nonlinear least squares problems. If , the default value is used.
Verify Level
Default
Verify
Verify Constraint Gradients
Verify Gradients
Verify Objective Gradients
These keywords refer to finite difference checks on the gradient elements computed by objfun and confun. The possible choices for are as follows:
Meaning
No checks are performed.
Only a ‘cheap’ test will be performed.
Individual gradient elements will also be checked using a reliable (but more expensive) test.
It is possible to specify to in several ways. For example, the nonlinear objective gradient (if any) will be verified if either or is specified. The constraint gradients will be verified if or or is specified. Similarly, the objective and the constraint gradients will be verified if or or is specified.
If , gradients will be verified at the first point that satisfies the linear constraints and bounds.
If , only a ‘cheap’ test will be performed, requiring one call to objfun and (if appropriate) one call to confun.
If , a more reliable (but more expensive) check will be made on individual gradient elements, within the ranges specified by the and keywords. A result of the form OK or BAD? is printed by e05usc to indicate whether or not each element appears to be correct. If a gradient element is determined to be extremely poor (i.e., if it appears to have no significant digits of accuracy at all) then e05usc will also exit with an error indicator in argument fail.
If , the action is the same as for , except that it will take place at the user-specified initial value of .
If or or , the default value is used.
We suggest that this option be used whenever a new user-supplied function is being developed.
13 Description of Monitoring Information
This section describes the long line of output ( characters) which forms part of the monitoring information produced by e05usc. (See also the description of the optional parameters , and .) You can control the level of printed output.
When and , the following line of output is produced at every major iteration of e05usc on the
file
specified by . In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Maj
is the major iteration count.
Mnr
is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see Section 11). Note that Mnr may be greater than the optional parameter if some iterations are required for the feasibility phase.
Step
is the step taken along the computed search direction. On reasonably well-behaved local problems, the unit step (i.e., ) will be taken as the solution is approached.
Nfun
is the cumulative number of evaluations of the objective function needed for the linesearch. Evaluations needed for the estimation of the gradients by finite differences are not included. Nfun is printed as a guide to the amount of work required for the linesearch.
Merit Function
is the value of the augmented Lagrangian merit function
(12) in e04ufc
at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters
(see Section 11 in e04wdc).
As the solution is approached, Merit Function will converge to the value of the objective function at the solution.
If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or
the local optimizer terminates. Repeated failures will prevent a feasible point being found for the nonlinear constraints.
If there are no nonlinear constraints present (i.e., ) then this entry contains Objective, the value of the objective function . The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints.
Norm Gz
is , the Euclidean norm of the projected gradient
(see Section 11 in e04wdc)
Norm Gz will be approximately zero in the neighbourhood of a solution.
Violtn
is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if ncnln is zero). Violtn will be approximately zero in the neighbourhood of a solution.
Nz
is the number of columns of
(see Section 11 in e04wdc).
The value of Nz is the number of variables minus the number of constraints in the predicted active set; i.e., .
Bnd
is the number of simple bound constraints in the predicted active set.
Lin
is the number of general linear constraints in the predicted working set.
Nln
is the number of nonlinear constraints in the predicted active set (not printed if ncnln is zero).
Penalty
is the Euclidean norm of the vector of penalty parameters used in the augmented Lagrangian merit function (not printed if ncnln is zero).
Cond H
is a lower bound on the condition number of the Hessian approximation .
Cond Hz
is a lower bound on the condition number of the projected Hessian approximation
(; see
(6) in e04ufc). The larger this number, the more difficult the local problem.
Cond T
is a lower bound on the condition number of the matrix of predicted active constraints.
Conv
is a three-letter indication of the status of the three convergence tests (2)–(4) defined in the description of the optional parameter . Each letter is T if the test is satisfied and F otherwise. The three tests indicate whether:
(i)the sequence of iterates has converged;
(ii)the projected gradient (Norm Gz) is sufficiently small; and
(iii)the norm of the residuals of constraints in the predicted active set (Violtn) is small enough.
If any of these indicators is F
for a successful local minimization
you should check the solution carefully.
M
is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite
(see Section 11 in e04wdc).
I
is printed if the QP subproblem has no feasible point.
C
is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is resolved with the central difference gradient and Jacobian.) If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that is close to a Kuhn–Tucker point
(see Section 11 in e04wdc).
L
is printed if the linesearch has produced a relative change in greater than the value defined by the optional parameter . If this output occurs frequently during later iterations of the run, optional parameter should be set to a larger value.
R
is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, is modified so that its diagonal condition estimator is bounded.
When and , the following line of output is produced at every minor iteration of e05usc on the file specified by . In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Itn
is the iteration count.
Jdel
is the index of the constraint deleted from the working set. If Jdel is zero, no constraint was deleted.
Jadd
is the index of the constraint added to the working set. If Jadd is zero, no constraint was added.
Step
is the step taken along the computed search direction. If a constraint is added during the current iteration (i.e., Jadd is positive), Step will be the step to the nearest constraint. During the optimality phase, the step can be greater than only if the factor is singular.
Ninf
is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.
Sinf/Objective
is the value of the current objective function. If is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If is feasible, Objective is the value of the objective function of
the QP subproblem.
The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point.
During the optimality phase the value of the objective function will be nonincreasing. During the feasibility phase the number of constraint infeasibilities will not increase until either a feasible point is found or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.
Bnd
is the number of simple bound constraints in the current working set.
Lin
is the number of general linear constraints in the current working set.
Art
is the number of artificial constraints in the working set, i.e., the number of columns of
(see Section 11).
Zr
is the number of columns of
(see Section 11).
Zr is the dimension of the subspace in which the objective function is currently being minimized. The value of Zr is the number of variables minus the number of constraints in the working set; i.e., .
The value of , the number of columns of
(see Section 11)
can be calculated as . A zero value of implies that lies at a vertex of the feasible region.
Norm Gz
is , the Euclidean norm of the reduced gradient with respect to . During the optimality phase, this norm will be approximately zero after a unit step.
Norm Gf
is the Euclidean norm of the gradient function with respect to the free variables, i.e., variables not currently held at a bound.
Cond T
is a lower bound on the condition number of the working set.
Cond Rz
is a lower bound on the condition number of the triangular factor (the first Zr rows and columns of the factor ). If the estimated rank of the data matrix is zero then Cond Rz is not printed.