Note: this routine uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.
e04stf is a solver from the NAG optimization modelling suite for constrained large-scale Nonlinear Programming (NLP) problems.
It is an interior point method optimization solver based on the IPOPT software package.
The routine may be called by the names e04stf or nagf_opt_handle_solve_ipopt.
3 Description
e04stf is typically used to solve the following nonlinear programming problem:
   minimize (with respect to x in R^n)  f(x)
   subject to  l_g <= g(x) <= u_g,
               l_B <= Bx <= u_B,
               l_x <= x <= u_x,
possibly together with quadratic constraints defined by e04rsf or e04rtf, where
n is the number of the decision variables,
m_g is the number of the nonlinear constraints and g(x), l_g and u_g are m_g-dimensional vectors,
m_Q is the number of the quadratic constraints,
m_B is the number of the linear constraints and B is an m_B by n matrix, l_B and u_B are m_B-dimensional vectors,
there are n box constraints and l_x and u_x are n-dimensional vectors.
The objective can be specified in a number of ways: e04ref defines a dense linear function, e04rff (or e04rsf) defines a sparse linear or quadratic function, e04rtf provides a quadratic function in factorized form, and e04rgf defines a general nonlinear function. In the last case, objfun and objgrd will be used to compute values and gradients of the objective function. Variable box bounds can be specified with e04rhf. The special case of linear constraints is handled by e04rjf, quadratic constraints are handled by e04rsf and e04rtf, while general nonlinear constraints are specified by e04rkf (all can be specified). Again, in the last case, confun and congrd will be used to compute values and gradients of the nonlinear constraint functions.
Finally, if it is viable to calculate second derivatives, the sparsity structure of the second partial derivatives of a general nonlinear objective and/or of any general nonlinear constraints is specified by e04rlf and the values of these derivatives themselves will be computed by user-supplied hess. While there is an option (see Hessian Mode) that forces internal approximation of second derivatives, no such option exists for first derivatives which must be computed accurately. If e04rlf has been called and hess is used to calculate values for second derivatives, both the nonlinear objective and all the nonlinear constraints must be included; it is not possible to provide a subset of these.
If the problem has only a linear or quadratic objective and only linear or quadratic constraints, then hess is never called since the required Hessian information is already provided by the calls to e04ref, e04rff, e04rjf, e04rsf and e04rtf.
If e04rlf is not called, then internal approximation of second derivatives will take place.
See Section 3.1 in the E04 Chapter Introduction for more details about the NAG optimization modelling suite.
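To put these routines in context, the following skeleton shows one possible shape of a calling program. It is a sketch only: the problem-definition calls are indicated by comments, the callback names (my_objfun and so on) are hypothetical placeholders for your own routines, the dimensions of rinfo and stats follow Section 9.2, and the exact argument lists of each suite routine should be taken from its own document.

Program e04stf_skeleton
  Use, Intrinsic                   :: iso_c_binding, Only: c_ptr, c_null_ptr
  Use nag_library, Only            : nag_wp, e04raf, e04stf, e04rzf
  Implicit None
  Integer, Parameter               :: nvar = 4
  Type (c_ptr)                     :: handle, cpuser
  Integer                          :: ifail, iuser(1)
  Real (Kind=nag_wp)               :: x(nvar), u(1), ruser(1)
  Real (Kind=nag_wp)               :: rinfo(100), stats(100)
  External                         :: my_objfun, my_objgrd, my_confun, my_congrd, &
                                      my_hess, my_monit

  cpuser = c_null_ptr
  ifail = 0
  Call e04raf(handle, nvar, ifail)      ! initialize an empty handle with nvar variables

  ! Define the model here, e.g., with e04rgf (sparsity of the nonlinear objective
  ! gradient), e04rhf (box bounds), e04rjf (linear constraints), e04rkf
  ! (nonlinear constraints) and e04rlf (Hessian sparsity patterns).

  x(:) = 1.0_nag_wp                     ! starting point
  ifail = -1
  Call e04stf(handle, my_objfun, my_objgrd, my_confun, my_congrd, my_hess,       &
              my_monit, nvar, x, 0, u, rinfo, stats, iuser, ruser, cpuser, ifail)

  ifail = 0
  Call e04rzf(handle, ifail)            ! destroy the handle and free its memory
End Program e04stf_skeleton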
3.1 Structure of the Lagrange Multipliers
For a problem consisting of n variable bounds, m_B linear constraints, m_Q quadratic constraints and m_g nonlinear constraints, the number of Lagrange multipliers, and consequently the correct value for nnzu, will be 2*(n + m_B + m_Q + m_g). In the u array, each variable bound and each constraint contributes a pair of multipliers, one for its lower and one for its upper bound. The variable bound constraint multipliers come first (if present, i.e., if e04rhf was called), followed by the linear constraint multipliers (if present, i.e., if e04rjf was called), followed by the quadratic constraint multipliers (if present, i.e., if e04rsf or e04rtf were called), and the nonlinear constraint multipliers (if present, i.e., if e04rkf was called).
Significantly nonzero values for any of these, after the solver has terminated, indicate that the corresponding constraint is active. Significance is judged in the first instance by the relative scale of any value compared to the smallest among them.
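For instance, for a hypothetical model with 4 bounded variables, 2 linear constraints, no quadratic constraints and 1 nonlinear constraint, the counting rule above gives the following (a sketch; the sizes are illustrative only):

Program multiplier_count
  Implicit None
  ! Hypothetical problem sizes: n bounded variables, mb linear, mq quadratic
  ! and mg nonlinear constraints.
  Integer, Parameter :: n = 4, mb = 2, mq = 0, mg = 1
  Integer            :: nnzu

  ! One lower-bound and one upper-bound multiplier per variable bound and
  ! per constraint.
  nnzu = 2*(n+mb+mq+mg)
  Write (*,*) 'nnzu =', nnzu      ! prints 14
End Program multiplier_count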
4 References
Byrd R H, Gilbert J Ch and Nocedal J (2000) A trust region method based on interior point techniques for nonlinear programming Mathematical Programming 89 149–185
Byrd R H, Liu G and Nocedal J (1997) On the local behavior of an interior point method for nonlinear programming Numerical Analysis (eds D F Griffiths and D J Higham) Addison–Wesley
Conn A R, Gould N I M, Orban D and Toint Ph L (2000) A primal-dual trust-region algorithm for non-convex nonlinear programming Mathematical Programming 87 (2) 215–249
Conn A R, Gould N I M and Toint Ph L (2000) Trust Region Methods SIAM, Philadelphia
Fiacco A V and McCormick G P (1990) Nonlinear Programming: Sequential Unconstrained Minimization Techniques SIAM, Philadelphia
Gould N I M, Orban D, Sartenaer A and Toint Ph L (2001) Superlinear convergence of primal-dual interior point algorithms for nonlinear programming SIAM Journal on Optimization 11 (4) 974–1002
Hock W and Schittkowski K (1981) Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187 Springer–Verlag
Hogg J D and Scott J A (2010) An indefinite sparse direct solver for large problems on multicore machines RAL Technical Report. RAL-TR-2010-011
Hogg J D and Scott J A (2011) HSL MA97: a bit-compatible multifrontal code for sparse symmetric systems RAL Technical Report. RAL-TR-2011-024
Wächter A and Biegler L T (2006) On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming Mathematical Programming 106 (1) 25–57
Williams P and Lang B (2013) A framework for the Algorithm: theory and implementation SIAM J. Sci. Comput. 35 740–766
Yamashita H (1998) A globally convergent primal-dual interior-point method for constrained optimization Optimization Methods and Software 10 443–469
5 Arguments
1: handle – Type (c_ptr) Input
On entry: the handle to the problem. It needs to be initialized (e.g., by e04raf) and to hold a problem formulation compatible with e04stf. It must not be changed between calls to the NAG optimization modelling suite.
2: objfun – Subroutine, supplied by the NAG Library or the user. External Procedure
objfun must calculate the value of the nonlinear objective function at a specified value of the n-element vector x of variables. If there is no nonlinear objective (e.g., e04ref, e04rff, e04rsf or e04rtf was called to define a linear or quadratic objective function), objfun will never be called by e04stf and objfun may be the dummy routine e04stv. (e04stv is included in the NAG Library.)
1: nvar – Integer Input
On entry: n, the current number of decision variables in the model.
2: x(nvar) – Real (Kind=nag_wp) array Input
On entry: the vector x of variable values at which the objective function is to be evaluated.
3: fx – Real (Kind=nag_wp) Output
On exit: the value of the objective function at x.
4: inform – Integer Input/Output
On entry: a non-negative value.
On exit: must be set to a value describing the action to be taken by the solver on return from objfun. Specifically, if the value is negative, then the value of fx will be discarded and the solver will either attempt to find a different trial point or terminate immediately with an error exit (see Section 6); otherwise, the solver will proceed normally.
5: iuser – Integer array User Workspace
6: ruser – Real (Kind=nag_wp) array User Workspace
7: cpuser – Type (c_ptr) User Workspace
objfun is called with the arguments iuser, ruser and cpuser as supplied to e04stf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to objfun.
objfun must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04stf is called. Arguments denoted as Input must not be changed by this procedure.
Note: objfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04stf. If your code inadvertently does return any NaNs or infinities, e04stf is likely to produce unexpected results.
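As an illustration, a version of objfun for a hypothetical nonlinear objective f(x) = sum of x_i squared (registered with e04rgf; not the objective of the example in Section 10) might look as follows. The declared intents follow the Input/Output descriptions above.

Subroutine my_objfun(nvar, x, fx, inform, iuser, ruser, cpuser)
  ! Sketch of objfun for the hypothetical objective f(x) = sum(x**2).
  Use, Intrinsic                     :: iso_c_binding, Only: c_ptr
  Use nag_library, Only              : nag_wp
  Implicit None
  Integer, Intent (In)               :: nvar
  Real (Kind=nag_wp), Intent (In)    :: x(nvar)
  Real (Kind=nag_wp), Intent (Out)   :: fx
  Integer, Intent (Inout)            :: inform
  Integer, Intent (Inout)            :: iuser(*)
  Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
  Type (c_ptr), Intent (In)          :: cpuser

  fx = sum(x(1:nvar)**2)
  inform = 0                         ! non-negative: accept the point and continue
End Subroutine my_objfun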
3: objgrd – Subroutine, supplied by the NAG Library or the user. External Procedure
objgrd must calculate the values of the nonlinear objective function gradient at a specified value of the n-element vector x of variables. If there is no nonlinear objective (e.g., e04ref, e04rff, e04rsf or e04rtf was called to define a linear or quadratic objective function), objgrd will never be called by e04stf and objgrd may be the dummy subroutine e04stw included in the NAG Library.
1: nvar – Integer Input
On entry: n, the current number of decision variables in the model.
2: x(nvar) – Real (Kind=nag_wp) array Input
On entry: the vector x of variable values at which the objective function gradient is to be evaluated.
3: nnzfd – Integer Input
On entry: the number of nonzero elements in the sparse gradient vector of the objective function, as was set in a previous call to e04rgf.
4: fdx(nnzfd) – Real (Kind=nag_wp) array Input/Output
On entry: the elements should only be assigned and not referenced.
On exit: the values of the nonzero elements in the sparse gradient vector of the objective function, in the order specified by idxfd in a previous call to e04rgf. fdx(i) will be the partial derivative of the objective function with respect to variable idxfd(i).
5: inform – Integer Input/Output
On entry: a non-negative value.
On exit: must be set to a value describing the action to be taken by the solver on return from objgrd. Specifically, if the value is negative then the value of fdx will be discarded and the solver will either attempt to find a different trial point or will terminate immediately with an error exit (see Section 6); otherwise, computations will continue.
6: iuser – Integer array User Workspace
7: ruser – Real (Kind=nag_wp) array User Workspace
8: cpuser – Type (c_ptr) User Workspace
objgrd is called with the arguments iuser, ruser and cpuser as supplied to e04stf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to objgrd.
objgrd must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04stf is called. Arguments denoted as Input must not be changed by this procedure.
Note: objgrd should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04stf. If your code inadvertently does return any NaNs or infinities, e04stf is likely to produce unexpected results.
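Continuing the hypothetical objective f(x) = sum of x_i squared used above, and assuming a dense gradient pattern idxfd = (1, 2, ..., nvar) was registered with e04rgf (so that nnzfd = nvar), a matching objgrd might be:

Subroutine my_objgrd(nvar, x, nnzfd, fdx, inform, iuser, ruser, cpuser)
  ! Sketch of objgrd for f(x) = sum(x**2) with a dense pattern idxfd = (1,...,nvar).
  Use, Intrinsic                     :: iso_c_binding, Only: c_ptr
  Use nag_library, Only              : nag_wp
  Implicit None
  Integer, Intent (In)               :: nvar, nnzfd
  Real (Kind=nag_wp), Intent (In)    :: x(nvar)
  Real (Kind=nag_wp), Intent (Inout) :: fdx(nnzfd)
  Integer, Intent (Inout)            :: inform
  Integer, Intent (Inout)            :: iuser(*)
  Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
  Type (c_ptr), Intent (In)          :: cpuser

  ! Element i corresponds to idxfd(i) = i, i.e., df/dx_i = 2*x_i.
  fdx(1:nnzfd) = 2.0_nag_wp*x(1:nnzfd)
  inform = 0
End Subroutine my_objgrd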
4: confun – Subroutine, supplied by the NAG Library or the user. External Procedure
confun must calculate the values of the m_g-element vector g(x) of nonlinear constraint functions at a specified value of the n-element vector x of variables. If there are no nonlinear constraints then confun will never be called by e04stf and it may be the dummy subroutine e04stx included in the NAG Library.
1: nvar – Integer Input
On entry: n, the current number of decision variables in the model.
2: x(nvar) – Real (Kind=nag_wp) array Input
On entry: the vector x of variable values at which the constraint functions are to be evaluated.
3: ncnln – Integer Input
On entry: m_g, the number of nonlinear constraints, as specified in an earlier call to e04rkf.
4: gx(ncnln) – Real (Kind=nag_wp) array Output
On exit: the values g(x) of the nonlinear constraint functions at x.
5: inform – Integer Input/Output
On entry: a non-negative value.
On exit: must be set to a value describing the action to be taken by the solver on return from confun. Specifically, if the value is negative, then the value of gx will be discarded and the solver will either attempt to find a different trial point or terminate immediately with an error exit (see Section 6); otherwise, the solver will proceed normally.
6: iuser – Integer array User Workspace
7: ruser – Real (Kind=nag_wp) array User Workspace
8: cpuser – Type (c_ptr) User Workspace
confun is called with the arguments iuser, ruser and cpuser as supplied to e04stf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to confun.
confun must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04stf is called. Arguments denoted as Input must not be changed by this procedure.
Note: confun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04stf. If your code inadvertently does return any NaNs or infinities, e04stf is likely to produce unexpected results.
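As an illustration, a confun for a single hypothetical nonlinear constraint g_1(x) = x_1 squared plus x_2 squared (with its bounds registered by e04rkf) might be written as:

Subroutine my_confun(nvar, x, ncnln, gx, inform, iuser, ruser, cpuser)
  ! Sketch of confun for one hypothetical constraint g_1(x) = x(1)**2 + x(2)**2.
  Use, Intrinsic                     :: iso_c_binding, Only: c_ptr
  Use nag_library, Only              : nag_wp
  Implicit None
  Integer, Intent (In)               :: nvar, ncnln
  Real (Kind=nag_wp), Intent (In)    :: x(nvar)
  Real (Kind=nag_wp), Intent (Out)   :: gx(ncnln)
  Integer, Intent (Inout)            :: inform
  Integer, Intent (Inout)            :: iuser(*)
  Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
  Type (c_ptr), Intent (In)          :: cpuser

  gx(1) = x(1)**2 + x(2)**2
  inform = 0
End Subroutine my_confun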
5: congrd – Subroutine, supplied by the NAG Library or the user. External Procedure
congrd must calculate the nonzero values of the sparse Jacobian of the nonlinear constraint functions at a specified value of the n-element vector x of variables. If there are no nonlinear constraints, congrd will never be called by e04stf and congrd may be the dummy subroutine e04sty included in the NAG Library.
1: nvar – Integer Input
On entry: n, the current number of decision variables in the model.
2: x(nvar) – Real (Kind=nag_wp) array Input
On entry: the vector x of variable values at which the Jacobian of the constraint functions is to be evaluated.
3: nnzgd – Integer Input
On entry: the number of nonzero elements in the sparse Jacobian of the constraint functions, as was set in a previous call to e04rkf.
4: gdx(nnzgd) – Real (Kind=nag_wp) array Input/Output
On entry: the elements should only be assigned and not referenced.
On exit: the nonzero values of the Jacobian of the nonlinear constraints, in the order specified by irowgd and icolgd in an earlier call to e04rkf. gdx(i) will be the partial derivative of constraint irowgd(i) with respect to variable icolgd(i).
5: inform – Integer Input/Output
On entry: a non-negative value.
On exit: must be set to a value describing the action to be taken by the solver on return from congrd. Specifically, if the value is negative the solution of the current problem will terminate immediately with an error exit (see Section 6); otherwise, computations will continue.
6: iuser – Integer array User Workspace
7: ruser – Real (Kind=nag_wp) array User Workspace
8: cpuser – Type (c_ptr) User Workspace
congrd is called with the arguments iuser, ruser and cpuser as supplied to e04stf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to congrd.
congrd must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04stf is called. Arguments denoted as Input must not be changed by this procedure.
Note: congrd should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04stf. If your code inadvertently does return any NaNs or infinities, e04stf is likely to produce unexpected results.
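For the hypothetical constraint g_1(x) = x_1 squared plus x_2 squared above, and assuming the sparsity pattern irowgd = (1, 1), icolgd = (1, 2) (so nnzgd = 2) was registered with e04rkf, the matching congrd might be:

Subroutine my_congrd(nvar, x, nnzgd, gdx, inform, iuser, ruser, cpuser)
  ! Sketch of congrd for g_1(x) = x(1)**2 + x(2)**2 with the pattern
  ! irowgd = (1,1), icolgd = (1,2).
  Use, Intrinsic                     :: iso_c_binding, Only: c_ptr
  Use nag_library, Only              : nag_wp
  Implicit None
  Integer, Intent (In)               :: nvar, nnzgd
  Real (Kind=nag_wp), Intent (In)    :: x(nvar)
  Real (Kind=nag_wp), Intent (Inout) :: gdx(nnzgd)
  Integer, Intent (Inout)            :: inform
  Integer, Intent (Inout)            :: iuser(*)
  Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
  Type (c_ptr), Intent (In)          :: cpuser

  gdx(1) = 2.0_nag_wp*x(1)           ! d g_1 / d x_1
  gdx(2) = 2.0_nag_wp*x(2)           ! d g_1 / d x_2
  inform = 0
End Subroutine my_congrd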
6: hess – Subroutine, supplied by the NAG Library or the user. External Procedure
hess must calculate the nonzero values of one of a set of second derivative quantities:
the Hessian of the Lagrangian function, sigma times the Hessian of f(x) plus the weighted sum of the constraint Hessians with weights lambda_i,
the Hessian of the objective function f(x),
the Hessian of an individual constraint function g_i(x).
The value of argument idf determines which one of these is to be computed and this, in turn, is determined by earlier calls to e04rlf, when the nonzero sparsity structure of these Hessians was registered. Please note that it is not possible to supply only a subset of the Hessians (see e04rlf). If there were no calls to e04rlf, hess will never be called by e04stf and hess may be the dummy subroutine e04stz (e04stz is included in the NAG Library). In this case, the Hessian of the Lagrangian will be approximated by a limited-memory quasi-Newton method (L-BFGS).
1: nvar – Integer Input
On entry: n, the current number of decision variables in the model.
2: x(nvar) – Real (Kind=nag_wp) array Input
On entry: the vector x of variable values at which the Hessian functions are to be evaluated.
3: ncnln – Integer Input
On entry: m_g, the number of nonlinear constraints, as specified in an earlier call to e04rkf.
4: idf – Integer Input
On entry: specifies the quantities to be computed in hx.
If idf requests the Lagrangian, the values of the Hessian of the Lagrangian will be computed in hx. This will be the case if e04rlf has been called with idf of the same value.
If idf requests the objective, the values of the Hessian of the objective function will be computed in hx. This will be the case if e04rlf has been called with idf of the same value.
Otherwise, the values of the Hessian of the constraint function with index idf will be computed in hx. This will be the case if e04rlf has been called with idf of the same value.
5: sigma – Real (Kind=nag_wp) Input
On entry: if the Hessian of the Lagrangian is requested, the value of the sigma quantity in the definition of the Hessian of the Lagrangian. Otherwise, sigma should not be referenced.
6: lambda(ncnln) – Real (Kind=nag_wp) array Input
On entry: if the Hessian of the Lagrangian is requested, the values of the lambda_i quantities in the definition of the Hessian of the Lagrangian. Otherwise, lambda should not be referenced.
7: nnzh – Integer Input
On entry: the number of nonzero elements in the Hessian to be computed.
8: hx(nnzh) – Real (Kind=nag_wp) array Input/Output
On entry: the elements should only be assigned and not referenced.
On exit: the nonzero values of the requested Hessian evaluated at x. For each value of idf, the ordering of nonzeros must follow the sparsity structure registered in the handle by earlier calls to e04rlf through the arguments irowh and icolh.
9: inform – Integer Input/Output
On entry: a non-negative value.
On exit: must be set to a value describing the action to be taken by the solver on return from hess. Specifically, if the value is negative the solution of the current problem will terminate immediately with an error exit (see Section 6); otherwise, computations will continue.
10: iuser – Integer array User Workspace
11: ruser – Real (Kind=nag_wp) array User Workspace
12: cpuser – Type (c_ptr) User Workspace
hess is called with the arguments iuser, ruser and cpuser as supplied to e04stf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to hess.
hess must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04stf is called. Arguments denoted as Input must not be changed by this procedure.
Note: hess should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04stf. If your code inadvertently does return any NaNs or infinities, e04stf is likely to produce unexpected results.
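As an illustration, a hess for the hypothetical model used above (objective f(x) = sum of x_i squared, one constraint g_1(x) = x_1 squared plus x_2 squared, with diagonal sparsity patterns registered through e04rlf) might look as follows. The interpretation of idf in the Select Case is an assumption made for this sketch (a negative value for the Lagrangian, zero for the objective, a positive value i for constraint i); it must match the values used when the sparsity structures were registered with e04rlf.

Subroutine my_hess(nvar, x, ncnln, idf, sigma, lambda, nnzh, hx, inform,         &
                   iuser, ruser, cpuser)
  ! Sketch of hess for f(x) = sum(x**2) and g_1(x) = x(1)**2 + x(2)**2,
  ! with diagonal sparsity patterns registered in e04rlf.
  Use, Intrinsic                     :: iso_c_binding, Only: c_ptr
  Use nag_library, Only              : nag_wp
  Implicit None
  Integer, Intent (In)               :: nvar, ncnln, idf, nnzh
  Real (Kind=nag_wp), Intent (In)    :: x(nvar), sigma, lambda(ncnln)
  Real (Kind=nag_wp), Intent (Inout) :: hx(nnzh)
  Integer, Intent (Inout)            :: inform
  Integer, Intent (Inout)            :: iuser(*)
  Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
  Type (c_ptr), Intent (In)          :: cpuser

  Select Case (idf)
  Case (0)
    ! Hessian of the objective: 2*I on the registered diagonal pattern.
    hx(1:nnzh) = 2.0_nag_wp
  Case (1:)
    ! Hessian of constraint idf = 1: 2 in the (1,1) and (2,2) positions.
    hx(1:nnzh) = 2.0_nag_wp
  Case Default
    ! Hessian of the Lagrangian: sigma*(2*I) + lambda(1)*(Hessian of g_1).
    hx(1:nnzh) = 2.0_nag_wp*sigma
    hx(1) = hx(1) + 2.0_nag_wp*lambda(1)
    hx(2) = hx(2) + 2.0_nag_wp*lambda(1)
  End Select
  inform = 0
End Subroutine my_hess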
7: monit – Subroutine, supplied by the NAG Library or the user. External Procedure
monit is provided to enable you to monitor the progress of the optimization. monit may be the dummy subroutine e04stu included in the NAG Library.
1: nvar – Integer Input
On entry: n, the current number of decision variables in the model.
2: x(nvar) – Real (Kind=nag_wp) array Input
On entry: the vector x of variable values at the current iteration.
3: nnzu – Integer Input
On entry: the dimension of the array u.
4: u(nnzu) – Real (Kind=nag_wp) array Input
On entry: if nnzu > 0, u holds the values of Lagrange multipliers (dual variables) for the constraints at the current iteration. See Section 3.1 for layout information.
5: inform – Integer Input/Output
On entry: a non-negative value.
On exit: may be used to request the solver to stop immediately. Specifically, if inform is set to a negative value the solver will terminate immediately with an error exit reporting user-requested termination (see Section 6); otherwise, the solver will proceed normally.
6: rinfo – Real (Kind=nag_wp) array Input
On entry: error measures and various indicators at the end of the current iteration as described in Section 9.1.
7: stats – Real (Kind=nag_wp) array Input
On entry: solver statistics at the end of the current iteration. It reports only the iteration count and the number of backtracking trial steps taken. See Section 9.1.
8: iuser – Integer array User Workspace
9: ruser – Real (Kind=nag_wp) array User Workspace
10: cpuser – Type (c_ptr) User Workspace
monit is called with the arguments iuser, ruser and cpuser as supplied to e04stf. You should use the arrays iuser and ruser, and the data handle cpuser to supply information to monit.
monit must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which e04stf is called. Arguments denoted as Input must not be changed by this procedure.
Note: monit should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04stf. If your code inadvertently does return any NaNs or infinities, e04stf is likely to produce unexpected results.
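A minimal monitoring routine might, for example, print something at each monitoring stop and leave inform non-negative (or set it negative to stop the solver). This is a sketch only; the argument order shown follows the argument descriptions above.

Subroutine my_monit(nvar, x, nnzu, u, inform, rinfo, stats, iuser, ruser, cpuser)
  ! Sketch of monit: print part of the current point and let the run continue.
  Use, Intrinsic                     :: iso_c_binding, Only: c_ptr
  Use nag_library, Only              : nag_wp
  Implicit None
  Integer, Intent (In)               :: nvar, nnzu
  Real (Kind=nag_wp), Intent (In)    :: x(nvar), u(nnzu), rinfo(*), stats(*)
  Integer, Intent (Inout)            :: inform
  Integer, Intent (Inout)            :: iuser(*)
  Real (Kind=nag_wp), Intent (Inout) :: ruser(*)
  Type (c_ptr), Intent (In)          :: cpuser

  Write (*,'(1X,A,1P,E13.5)') 'monit: x(1) = ', x(1)
  ! Set inform to a negative value here to request immediate termination.
  inform = 0
End Subroutine my_monit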
8: nvar – Integer Input
On entry: n, the current number of decision variables in the model.
9: x(nvar) – Real (Kind=nag_wp) array Input/Output
On entry: x_0, the initial estimates of the variables x.
On exit: the final values of the variables x.
10: nnzu – Integer Input
On entry: the number of Lagrange multipliers that are to be returned in array u.
If nnzu = 0, u will not be referenced; otherwise, it needs to match the dimension as explained in Section 3.1.
Constraints:
nnzu >= 0;
if nnzu > 0, nnzu must be equal to the total number of Lagrange multipliers described in Section 3.1.
11: u(nnzu) – Real (Kind=nag_wp) array Input/Output
On entry: the input of u is reserved for future releases of the NAG Library and it is ignored at the moment.
Note: if nnzu > 0, u holds Lagrange multipliers (dual variables) for the constraints. See Section 3.1 for layout information. If nnzu = 0, u will not be referenced.
On exit: the final values of the Lagrange multipliers u.
12: rinfo – Real (Kind=nag_wp) array Output
On exit: error measures and various indicators at the end of the final iteration as given in the list below (the remaining elements of rinfo are reserved for future use):
– objective function value f(x);
– constraint violation (primal infeasibility), see (7);
– regularization term for the Hessian of the Lagrangian (this value is only available in monit, see Iteration log in Section 9.1);
– step size for the dual variables (this value is only available in monit, see Iteration log in Section 9.1);
– step size for the primal variables (this value is only available in monit, see Iteration log in Section 9.1).
13: stats – Real (Kind=nag_wp) array Output
On exit: solver statistics at the end of the final iteration as given in the list below (the remaining elements of stats are reserved for future use):
– number of the iterations;
– number of backtracking trial steps;
– number of Hessian evaluations;
– number of objective gradient evaluations;
– total wall clock time elapsed;
– number of objective function evaluations;
– number of constraint function evaluations;
– number of constraint Jacobian evaluations.
14: iuser – Integer array User Workspace
15: ruser – Real (Kind=nag_wp) array User Workspace
16: cpuser – Type (c_ptr) User Workspace
iuser, ruser and cpuser are not used by e04stf, but are passed directly to objfun, objgrd, confun, congrd, hess and monit and may be used to pass information to these routines. If you do not need to reference cpuser, it should be initialized to c_null_ptr.
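As an illustration of how cpuser can carry your own data into the callbacks, one possibility (a sketch; the type and field names are hypothetical) is to point cpuser at a target variable with c_loc in the calling program and recover it with c_f_pointer inside a callback:

Module my_data_mod
  ! Hypothetical user data passed to the callbacks through cpuser.
  Use nag_library, Only: nag_wp
  Implicit None
  Type :: my_data
    Real (Kind=nag_wp) :: scale
  End Type my_data
End Module my_data_mod

! In the calling program:
!   Use, Intrinsic :: iso_c_binding, Only: c_ptr, c_loc
!   Use my_data_mod, Only: my_data
!   Type (my_data), Target :: mydata
!   Type (c_ptr)           :: cpuser
!   mydata%scale = 2.0_nag_wp
!   cpuser = c_loc(mydata)
!
! Inside objfun (or any other callback):
!   Use, Intrinsic :: iso_c_binding, Only: c_f_pointer
!   Type (my_data), Pointer :: p
!   Call c_f_pointer(cpuser, p)    ! p%scale now holds 2.0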
17: ifail – Integer Input/Output
On entry: ifail must be set to 0, -1 or 1 to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of 0 causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of -1 means that an error message is printed while a value of 1 means that it is not.
If halting is not appropriate, the value -1 or 1 is recommended. If message printing is undesirable, then the value 1 is recommended. Otherwise, the value -1 is recommended since useful values can be provided in some output arguments even when ifail is nonzero on exit. When the value -1 or 1 is used it is essential to test the value of ifail on exit.
On exit: ifail = 0 unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ifail = 0 or -1, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
Note: in some cases e04stf may return useful information.
The supplied handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been initialized by e04raf or it has been corrupted.
The problem is already being solved.
This solver does not support the model defined in the handle.
On entry, nvar = ⟨value⟩, expected value = ⟨value⟩.
Constraint: nvar must match the current number of variables of the model in the handle.
On entry, nnzu = ⟨value⟩. Constraint: nnzu = 0 or nnzu = ⟨value⟩.
On entry, nnzu = ⟨value⟩. Constraint: no constraints present, so nnzu must be 0.
Either all of the constraint and objective Hessian structures must be defined or none (in which case, the Hessians will be approximated by a limited-memory quasi-Newton L-BFGS method). On entry, a nonlinear objective function has been defined but no objective Hessian sparsity structure has been defined through e04rlf.
On entry, a nonlinear constraint function has been defined but no constraint Hessian sparsity structure has been defined through e04rlf, for constraint number ⟨value⟩.
The dummy confun routine was called but the problem requires these values. Please provide a proper confun routine.
The dummy congrd routine was called but the problem requires these derivatives. Please provide a proper congrd routine.
The dummy hess routine was called but the problem requires these derivatives. Either change the optional parameter Hessian Mode or provide a proper hess routine.
The dummy objfun routine was called but the problem requires these values. Please provide a proper objfun routine.
The dummy objgrd routine was called but the problem requires these derivatives. Please provide a proper objgrd routine.
User requested termination during a monitoring step. inform was set to a negative value in monit.
Maximum number of iterations exceeded.
The solver terminated after an error in the step computation. This message is printed if the solver is unable to compute a search direction, despite several attempts to modify the iteration matrix. Usually, the value of the regularization parameter then becomes too large. One situation where this can happen is when values in the Hessian are invalid (NaN or Infinity). You can check whether this is true by using the Verify Derivatives option.
The solver terminated after failure in the restoration phase. This indicates that the restoration phase failed to find a feasible point that was acceptable to the filter line search for the original problem. This could happen if the problem is highly degenerate, does not satisfy the constraint qualification, or if your NLP code provides incorrect derivative information.
The solver terminated after the maximum time allowed was exceeded. Maximum number of seconds exceeded. Use optional parameter Time Limit to reset the limit.
The solver terminated due to an invalid option. Please contact NAG with details of the call to e04stf.
The solver terminated due to an invalid problem definition. Please contact NAG with details of the call to e04stf.
The solver terminated with not enough degrees of freedom. This indicates that your problem, as specified, has too few degrees of freedom. This can happen if you have too many equality constraints, or if you fix too many variables.
The solver terminated after the search direction became too small. This indicates that the solver is calculating very small step sizes and is making very little progress. This could happen if the problem has been solved to the best numerical accuracy possible given the current NLP scaling.
Invalid number detected in user function. Either inform was set to a negative value within the user-supplied functions objfun, objgrd, confun, congrd or hess, or an Infinity or NaN was detected in values returned from them.
The solver reports NLP solved to acceptable level. This indicates that the algorithm did not converge to the desired tolerances, but that it was able to obtain a point satisfying the acceptable tolerance level. This may happen if the desired tolerances are too small for the current problem.
The solver detected an infeasible problem. The restoration phase converged to a point that is a minimizer for the constraint violation (in the l1-norm), but is not feasible for the original problem. This indicates that the problem may be infeasible (or at least that the algorithm is stuck at a locally infeasible point). The returned point (the minimizer of the constraint violation) might help you to find which constraint is causing the problem. If you believe that the NLP is feasible, it might help to start the optimization from a different point.
The solver terminated due to diverging iterates. The max-norm of the iterates has become larger than a preset value. This can happen if the problem is unbounded below and the iterates are diverging.
e04stf is not available in this implementation.
An unexpected error has been triggered by this routine. Please
contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
The accuracy of the solution is driven by optional parameter Stop Tolerance 1.
If ifail = 0 on the final exit, the returned point satisfies the Karush–Kuhn–Tucker (KKT) conditions to the requested accuracy (under the default settings close to the square root of the machine precision) and thus it is a good estimate of a local solution. If the solver reports that the problem was only solved to an acceptable level, some of the convergence conditions were not fully satisfied but the point still seems to be a reasonable estimate and should be usable. Please refer to Section 11.1 and the description of the particular options.
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
e04stf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e04stf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9 Further Comments
9.1 Description of the Printed Output
The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent streams (files) which are set by optional parameters Print File and Monitoring File. Optional parameters Print Level and Monitoring Level determine the exposed level of detail. This allows, for example, the generation of a detailed log in a file while the condensed information is displayed on the screen. This section also describes what kind of information is made available to the monitoring routine monit via rinfo and stats.
There are four sections printed to the primary output with the default settings: a derivative check, a header, an iteration log and a summary. At higher levels more information will be printed, including any internal IPOPT options that have been changed from their default values.
Header
If the Header is enabled by Print Level, it will contain option settings and statistics about the size of the problem as the solver sees it, i.e., it reflects any changes imposed by preprocessing and problem transformations. The header may look similar to:
Banner and optional parameters list
------------------------------------------------------------------------------
E04ST, Interior point method for large-scale nonlinear optimization problems
------------------------------------------------------------------------------
Begin of Options
Print File = 6 * d
Print Level = 2 * U
Monitoring File = 67 * U
Monitoring Level = 2 * U
Infinite Bound Size = 1.00000E+20 * d
Task = Minimize * d
Stats Time = No * d
Time Limit = 1.00000E+01 * U
Verify Derivatives = No * d
Hessian Mode = Auto * d
Nlp Factorization Method = Ma97 * d
Matrix Ordering = Auto * d
Outer Iteration Limit = 26 * U
Stop Tolerance 1 = 2.50000E-08 * U
End of Options
Summary of the problem
Number of nonzeros in equality constraint Jacobian...: 4
Number of nonzeros in inequality constraint Jacobian.: 8
Number of nonzeros in Lagrangian Hessian.............: 10
Total number of variables............................: 4
variables with only lower bounds: 4
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 1
Total number of inequality constraints...............: 2
inequality constraints with only lower bounds: 2
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
Derivative Check
If Verify Derivatives is set, then information will appear about any errors detected in the user-supplied derivative routines objgrd, congrd or hess. It may look like this:
Starting derivative checker for first derivatives.
* grad_f[ 1] = -2.000000E+00 ~ 2.455000E+01 [ 1.081E+00]
* jac_g [ 1, 4] = 4.700969E+01 v ~ 5.200968E+01 [ 9.614E-02]
Starting derivative checker for second derivatives.
* obj_hess[ 1, 1] = 1.881000E+03 v ~ 1.882000E+03 [ 5.314E-04]
* 1-th constr_hess[ 1, 3] = 2.988964E+00 v ~ -1.103543E-02 [ 3.000E+00]
Derivative checker detected 3 error(s).
The first line indicates that the value for the partial derivative of the objective with respect to the first variable as returned by objgrd (the first one printed) differs sufficiently from a finite difference estimation derived from objfun (the second one printed). The number in square brackets is the relative difference between these two numbers.
The second line reports on a discrepancy for the partial derivative of the first constraint with respect to the fourth variable. If the indicator v is absent, the discrepancy refers to a component that had not been included in the sparsity structure, in which case the nonzero structure of the derivatives should be corrected. Mistakes in the first derivatives should be corrected before attempting to correct mistakes in the second derivatives.
The third line reports on a discrepancy in a second derivative of the objective function, differentiated with respect to the first variable, twice.
The fourth line reports on a discrepancy in a second derivative of the first constraint, differentiated with respect to the first and third variables.
Iteration log
If the Iteration log is enabled by Print Level, the status of each iteration is condensed to one line. The line shows:
iter
The current iteration count. This includes regular iterations and iterations during the restoration phase. If the algorithm is in the restoration phase, the letter r will be appended to the iteration number. The iteration number 0 represents the starting point. This quantity is also available as an element of stats in monit.
objective
The unscaled objective value at the current point (given the current NLP scaling). During the restoration phase, this value remains the unscaled objective value for the original problem. This quantity is also available as an element of rinfo in monit.
inf_pr
The unscaled constraint violation at the current point (given the current NLP scaling). This quantity is the infinity-norm (max) of the (unscaled) constraint violations. During the restoration phase, this value remains the constraint violation of the original problem at the current point. This quantity is also available as an element of rinfo in monit.
inf_du
The scaled dual infeasibility at the current point (given the current NLP scaling). This quantity measures the infinity-norm (max) of the internal dual infeasibility, Eq. (4a) in the implementation paper Wächter and Biegler (2006), including inequality constraints reformulated using slack variables and NLP scaling. During the restoration phase, this is the value of the dual infeasibility for the restoration phase problem. This quantity is also available as an element of rinfo in monit.
lg(mu)
log_10 of the value of the barrier parameter mu. The value of mu itself is also available as an element of rinfo in monit.
||d||
The infinity norm (max) of the primal step (for the original variables x and the internal slack variables s). During the restoration phase, this value includes the values of the additional variables introduced there (see Eq. (30) in Wächter and Biegler (2006)). This quantity is also available as an element of rinfo in monit.
lg(rg)
log_10 of the value of the regularization term for the Hessian of the Lagrangian in the augmented system (see Eq. (26) and Section 3.1 in Wächter and Biegler (2006)). A dash (–) indicates that no regularization was done. The regularization term itself is also available as an element of rinfo in monit.
alpha_du
The step size for the dual variables (see Eq. (14c) in Wächter and Biegler (2006)). This quantity is also available as an element of rinfo in monit.
alpha_pr
The step size for the primal variables (see Eq. (14a) in Wächter and Biegler (2006)). This quantity is also available as an element of rinfo in monit. The number is usually followed by a character for additional diagnostic information regarding the step acceptance criterion.
f – f-type iteration in the filter method without second-order correction
F – f-type iteration in the filter method with second-order correction
h – h-type iteration in the filter method without second-order correction
H – h-type iteration in the filter method with second-order correction
k – penalty value unchanged in merit function method without second-order correction
K – penalty value unchanged in merit function method with second-order correction
n – penalty value updated in merit function method without second-order correction
N – penalty value updated in merit function method with second-order correction
R – restoration phase just started
w – in watchdog procedure
s – step accepted in soft restoration phase
t/T – tiny step accepted without line search
r – some previous iterate restored
ls
The number of backtracking line search steps (does not include second-order correction steps). This quantity is also available as an element of stats in monit.
Note that the step acceptance mechanisms in IPOPT consider the barrier objective function (4), which is usually different from the value reported in the objective column. Similarly, for the purposes of the step acceptance, the constraint violation is measured for the internal problem formulation, which includes slack variables for inequality constraints and potentially NLP scaling of the constraint functions. This value, too, is usually different from the value reported in inf_pr. As a consequence, a new iterate might have worse values both for the objective function and the constraint violation as reported in the iteration output, seemingly contradicting the globalization procedure.
Note that all these values are also available in rinfo and stats of the monitoring routine monit.
At higher settings of Print Level, each iteration produces significantly more detailed output comprising detailed error measures and output from internal operations. The output is reasonably self-explanatory so it is not featured here in detail.
Summary
Once the solver finishes, a detailed summary is produced if it is enabled by Print Level. An example is shown below:
Number of Iterations....: 6
(scaled) (unscaled)
Objective...............: 7.8692659500479623E-01 6.2324586324379867E+00
Dual infeasibility......: 7.9744615766675617E-10 6.3157735687207093E-09
Constraint violation....: 8.3555384833289281E-12 8.3555384833289281E-12
Complementarity.........: 0.0000000000000000E+00 0.0000000000000000E+00
Overall NLP error.......: 7.9744615766675617E-10 6.3157735687207093E-09
Number of objective function evaluations = 7
Number of objective gradient evaluations = 7
Number of equality constraint evaluations = 7
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 7
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 6
Total CPU secs in IPOPT (w/o function evaluations) = 0.724
Total CPU secs in NLP function evaluations = 0.343
EXIT: Optimal Solution Found.
It starts with the total number of iterations the algorithm went through. Then, five quantities are printed, all evaluated at the termination point: the value of the objective function, the dual infeasibility, the constraint violation, the complementarity and the NLP error.
This is followed by some statistics on the number of calls to user-supplied functions and CPU time taken in user-supplied functions and the main algorithm. Lastly, status at exit is indicated by a short message. Detailed timings of the algorithm are displayed only if Stats Time is set.
9.2 Internal Changes
Internal changes have been made to this routine as follows:
At Mark 26.1:
The default for the optional parameter Verify Derivatives was changed from Yes to No, meaning that the derivatives will not be checked unless you explicitly request them to be, and the description of the option was removed.
The lowest settings of Print Level and Monitoring Level no longer produce any output; in the previous release a banner was printed.
A new option Task was introduced. It allows you to easily switch between minimization, maximization and finding a feasible point. The previous release assumed minimization, which is now the default choice.
A new option Matrix Ordering was introduced. It allows you to choose the fill-reducing ordering for the internal sparse linear algebra solver. Originally, at Mark 26, only AMD ordering was implemented. METIS ordering has now been introduced which is especially efficient for large-scale problems. A heuristic to automatically choose between the two orderings was also added and is now the default choice.
At Mark 27:
The name of the argument ‘mon’ was updated to monit to be consistent with the rest of the NAG optimization modelling suite routines.
At Mark 27.1:
The arguments fdx of objgrd and gdx of congrd are now Intent (Inout) instead of Intent (Out) to stay consistent with the other solvers of the NAG optimization modelling suite.
At Mark 28.3:
The e04stf interface was updated. The size of the information arrays rinfo and stats has been increased from 32 to 100. The intent of the Lagrange multipliers array u changed from Intent (Out) to Intent (Inout); this array is only assigned to and not referenced. Also, the intent of the argument hx in the user call-back hess was changed from Intent (Out) to Intent (Inout).
Calls to vendor linear algebra routines have been rationalized in order to behave more predictably and consistently in processes which depend on external vendor libraries.
At Mark 30.2:
Another underlying sparse linear algebra solver, Harwell package MA86, was introduced. The option NLP Factorization Method can be used to choose between Harwell package MA86 or MA97.
For details of all known issues which have been reported for the NAG Library please refer to the Known Issues.
9.3 Additional Licensor
Parts of the code for e04stf are distributed according to terms imposed by the Eclipse Public License. Please refer to Library Licensors for further details.
10 Example
This example is based on Problem 73 in Hock and Schittkowski (1981) and involves the minimization of a linear objective function of four variables subject to nonnegativity bounds on the variables, one nonlinear constraint and two linear constraints. The initial point supplied to the solver is infeasible; the optimal objective value and solution vector are reported to five significant figures by the example program.
11 Algorithmic Details
e04stf is an implementation of IPOPT (see Wächter and Biegler (2006)) that is fully supported and maintained by NAG. It uses the Harwell packages MA97 or MA86 for the underlying sparse linear algebra factorization. MA86 uses the MC68 approximate minimum degree ordering and MA97 uses either the MC68 approximate minimum degree ordering or the METIS algorithm. Any issues relating to e04stf should be directed to NAG who assume all responsibility for the e04stf routine and its implementation.
To simplify notation, we describe the method for the problem formulation
   minimize (over x in R^n)  f(x)     (1)
   subject to  c(x) = 0,              (2)
               x >= 0,                (3)
where c maps R^n into R^m. Range constraints of the form l <= d(x) <= u can be expressed in this formulation by introducing slack variables s (increasing the number of variables) and defining new equality constraints d(x) - s = 0 together with the bounds l <= s <= u.
The method computes approximate solutions of a sequence of barrier problems
   minimize (over x in R^n)  f(x) - mu * sum_{i=1}^{n} ln(x_i)     (4)
   subject to  c(x) = 0                                            (5)
for a decreasing sequence of barrier parameters mu converging to zero.
The algorithm may be interpreted as a homotopy method applied to the primal-dual equations,
   grad f(x) + grad c(x) lambda - z = 0,     (6)
   c(x) = 0,                                 (7)
   XZe - mu e = 0,                           (8)
with the homotopy parameter mu, which is driven to zero (see, e.g., Byrd et al. (1997) and Gould et al. (2001)). Here X := diag(x) and Z := diag(z), and e stands for the vector of all ones of appropriate dimension, while lambda and z correspond to the Lagrange multipliers for the equality constraints (2) and the bound constraints (3), respectively.
Note that the equations (6), (7) and (8) for mu = 0, together with 'x >= 0, z >= 0', are the Karush–Kuhn–Tucker (KKT) conditions for the original problem (1), (2) and (3). Those are the first-order optimality conditions for (1), (2) and (3) if constraint qualifications are satisfied (Conn et al. (2000)).
Starting from an initial point supplied in x, e04stf computes an approximate solution to the barrier problem (4) and (5) for a fixed value of mu (by default, mu = 0.1), then decreases the barrier parameter, and continues the solution of the next barrier problem from the approximate solution of the previous one.
A sophisticated overall termination criterion for the algorithm is used to overcome potential difficulties when the Lagrange multipliers become large. This can happen, for example, when the gradients of the active constraints are nearly linearly dependent. The termination criterion is described in detail by Wächter and Biegler (2006) (also see Section 11.1 below).
11.1 Stopping Criteria
Using the individual parts of the primal-dual equations (6), (7) and (8), we define the optimality error for the barrier problem as
   E_mu(x, lambda, z) = max { ||grad f(x) + grad c(x) lambda - z||_inf / s_d ,  ||c(x)||_inf ,  ||XZe - mu e||_inf / s_c }     (9)
with scaling parameters s_d and s_c defined below (not to be confused with the NLP scaling factors described in Section 11.2). By E_0(x, lambda, z) we denote (9) with mu = 0; this measures the optimality error for the original problem (1), (2) and (3). The overall algorithm terminates if an approximate solution (x*, lambda*, z*) (including multiplier estimates) satisfying
   E_0(x*, lambda*, z*) <= epsilon_tol     (10)
is found, where epsilon_tol is the user-supplied error tolerance in optional parameter Stop Tolerance 1.
Even if the original problem is well scaled, the multipliers lambda and z might become very large, for example, when the gradients of the active constraints are (nearly) linearly dependent at a solution of (1), (2) and (3). In this case, the algorithm might encounter numerical difficulties satisfying the unscaled primal-dual equations (6), (7) and (8) to a tight tolerance. In order to adapt the termination criteria to handle such circumstances, we choose the scaling factors
   s_d = max { s_max , (||lambda||_1 + ||z||_1) / (m + n) } / s_max ,   s_c = max { s_max , ||z||_1 / n } / s_max
in (9). In this way, a component of the optimality error is scaled whenever the average value of the multipliers becomes larger than a fixed number s_max (s_max = 100 in our implementation). Also note that, in the case that the multipliers diverge, E_0 can only become small if a Fritz John point for (1), (2) and (3) is approached, or if the primal variables diverge as well.
11.2 Scaling the NLP
Ideally, the formulated problem should be scaled so that, near the solution, all function gradients (objective and constraints), when nonzero, are of a similar order of magnitude. e04stf will compute automatic NLP scaling factors for the objective and constraint functions (but not the decision variables) and apply them if large imbalances of scale are detected. This rescaling is only computed at the starting point. References to scaled or unscaled objective or constraints in Section 9.1 and Section 11 should be understood in this context.
12 Optional Parameters
Several optional parameters in e04stf define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e04stf these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling e04zmf anytime between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
If any options are set internally by the solver (typically those with an automatic default choice), their value can be retrieved by e04znf. If the solver is called again, any such arguments are reset to their default values and the decision is made again.
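For example, a few options might be set on an initialized handle with e04zmf as in the following sketch (the particular option values shown are illustrative only):

Subroutine set_my_options(handle)
  ! Sketch: set some optional parameters before calling e04stf.
  Use, Intrinsic        :: iso_c_binding, Only: c_ptr
  Use nag_library, Only : e04zmf
  Implicit None
  Type (c_ptr), Intent (Inout) :: handle
  Integer                      :: ifail

  ifail = 0
  Call e04zmf(handle, 'Print Level = 2', ifail)
  Call e04zmf(handle, 'Stop Tolerance 1 = 1.0e-7', ifail)
  Call e04zmf(handle, 'Time Limit = 60', ifail)
End Subroutine set_my_options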
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
the keywords, where the minimum abbreviation of each keyword is underlined;
a parameter value, where the letters a, i and r denote options that take character, integer and real values respectively;
the default value, where the symbol epsilon is a generic notation for machine precision (see x02ajf).
All options accept the value DEFAULT to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
Hessian Mode
Default
This parameter specifies whether the Hessian will be user-supplied (in hx) or approximated by e04stf using a limited-memory quasi-Newton L-BFGS method. In the automatic setting, if no Hessian structure has been registered in the problem with a call to e04rlf and there are general nonlinear objective or constraints, then the Hessian will be approximated. Otherwise hess will be called if and only if any of e04rgf or e04rkf have been used to define the problem. Approximating the Hessian is likely to require more iterations to achieve convergence but will reduce the time spent in user-supplied functions.
Constraint: , or .
Infinite Bound Size
Default
This defines the ‘infinite’ bound bigbnd in the definition of the problem constraints. Any upper bound greater than or equal to bigbnd will be regarded as +infinity (and similarly any lower bound less than or equal to -bigbnd will be regarded as -infinity). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected. It also serves as a limit for the objective function to be considered unbounded (see Section 6).
Constraint: .
Monitoring File
Default
If the option value is non-negative, it is the unit number for the secondary (monitoring) output. If it is set to -1, no secondary output is provided. The information output to this unit is controlled by Monitoring Level.
Constraint: .
Monitoring Level
Default
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with Print Level.
Constraint: .
Matrix Ordering
Default
If NLP Factorization Method is set to MA97, this parameter specifies the ordering to be used by the internal sparse linear algebra solver. It affects the number of nonzeros in the factorized matrix and thus influences the cost per iteration.
A heuristic is used to choose automatically between METIS and AMD orderings.
Both AMD and METIS orderings are computed at the beginning of the solve and the one with the fewest nonzeros in the factorized matrix is selected.
An approximate minimum degree (AMD) ordering is used.
NLP Factorization Method
Default
This parameter controls whether the Harwell package MA86 or MA97 is used for the sparse linear algebra factorization. To determine which best suits your application, it is recommended to try both MA86 and MA97.
Constraint: NLP Factorization Method = MA86 or MA97.
Outer Iteration Limit
Default
The maximum number of iterations to be performed by e04stf. Setting the option too low might lead to an early exit with the error indicating that the iteration limit was exceeded (see Section 6).
Constraint: .
Print File
Default
If the option value is non-negative, it is the unit number for the primary output of the solver. If it is set to -1, the primary output is completely turned off independently of other settings. The default value is the advisory message unit number as defined by x04abf at the time of the optional parameters initialization, e.g., at the initialization of the handle. The information output to this unit is controlled by Print Level.
Constraint: .
Print Level
Default
This parameter defines how detailed information should be printed by the solver to the primary output.
In order of increasing Print Level, the output comprises:
– no output from the solver;
– additionally, derivative check information, the Header and Summary;
– additionally, the Iteration log;
– additionally, details of each iteration with scalar quantities printed;
– additionally, individual components of arrays are printed, resulting in large output.
Constraint: .
Print Options
Default
If set to YES, a listing of optional parameters will be printed to the primary output.
Constraint: Print Options = YES or NO.
Print Solution
Default
If set to X, the final values of the primal variables are printed on the primary and secondary outputs.
If set to YES or ALL, in addition to the primal variables, the final values of the dual variables are printed on the primary and secondary outputs.
Constraint: Print Solution = NO, X, YES or ALL.
Stats Time
Default
This parameter allows you to turn on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful when choosing between different solution approaches.
Constraint: or .
Stop Tolerance 1
Default
This option sets the value of epsilon_tol used in (10) for the optimality and complementarity tests from the KKT conditions. See Section 11.1.
Constraint: .
Task
Default
This parameter specifies the required direction of the optimization. If set to FEASIBLE POINT, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function is set, Task reverts to FEASIBLE POINT automatically.
Constraint: Task = MINIMIZE, MAXIMIZE or FEASIBLE POINT.
Time Limit
Default
A limit on the number of seconds that the solver can use to solve one problem. If during the convergence check this limit is exceeded, the solver will terminate with a corresponding error message.
Constraint: .
Verify Derivatives
Default
This parameter specifies whether the routine should perform numerical checks on the consistency of the user-supplied derivative functions. It is recommended that such checks are enabled when first developing the formulation of the problem; however, the derivative check results in a significant increase in the number of function evaluations and thus it should not be used in production code.