naginterfaces.library.opt.nlp1_sparse_solve

naginterfaces.library.opt.nlp1_sparse_solve(m, ncnln, nonln, njnln, iobj, a, ha, ka, bl, bu, start, names, ns, xs, istate, clamda, comm, confun=None, objfun=None, leniz=None, lenz=500, data=None, io_manager=None)[source]

nlp1_sparse_solve solves sparse nonlinear programming problems.

Note: this function uses optional algorithmic parameters, see also: nlp1_sparse_option_file(), nlp1_sparse_option_string(), nlp1_init().

Deprecated since version 28.3.0.0: nlp1_sparse_solve is deprecated. Please use handle_solve_ssqp() instead. See also the Replacement Calls document.

For full information please refer to the NAG Library document for e04ug

https://support.nag.com/numeric/nl/nagdoc_30.2/flhtml/e04/e04ugf.html
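The following is a minimal, hedged sketch of a call, intended only to illustrate the data layout described in the Parameters section below. The tiny problem (minimize (x1-1)^2 + (x2-2)^2 subject to x1 + x2 <= 2 and simple bounds), all numeric data, the use of a single empty name string to request default names, and the routine name passed to nlp1_init() are illustrative assumptions, not NAG-supplied example values:

    from naginterfaces.library import opt

    def objfun(mode, x, objgrd, nstate, data=None):
        # Nonlinear objective: (x1-1)^2 + (x2-2)^2.
        objf = (x[0] - 1.0)**2 + (x[1] - 2.0)**2
        if mode in (1, 2):
            objgrd[:] = [2.0*(x[0] - 1.0), 2.0*(x[1] - 2.0)]
        return mode, objf, objgrd

    n, m = 2, 1                     # two variables, one general (linear) row
    ncnln, nonln, njnln = 0, 2, 0   # no nonlinear constraints; both variables in the nonlinear objective
    iobj = 0                        # no free (linear objective) row

    # The single linear row x1 + x2, stored by columns (1-based positions):
    a = [1.0, 1.0]                  # nonzeros, column by column
    ha = [1, 1]                     # row index of each nonzero
    ka = [1, 2, 3]                  # start of each column in a; last entry = nnz + 1

    bigbnd = 1e25                   # treated as 'infinite' (>= 'Infinite Bound Size')
    bl = [-5.0, -5.0, -bigbnd]      # n variable bounds, then m row bounds
    bu = [5.0, 5.0, 2.0]            # row bound: x1 + x2 <= 2

    comm = opt.nlp1_init('nlp1_sparse_solve')
    res = opt.nlp1_sparse_solve(
        m, ncnln, nonln, njnln, iobj, a, ha, ka, bl, bu,
        'C',                        # cold start
        [''],                       # nname = 1: default names
        0,                          # ns: ignored on a cold start
        [0.0]*(n + m),              # xs: initial variables and slacks
        [0]*(n + m),                # istate: all eligible for the Crash basis
        [0.0]*(n + m),              # clamda: no multiplier estimates
        comm, objfun=objfun,
    )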

Parameters
m : int

m, the number of general constraints (or slacks). This is the number of rows in the constraint matrix A, including the free row (if any; see iobj). Note that A must contain at least one row. If your problem has no constraints, or only upper and lower bounds on the variables, then you must include a dummy ‘free’ row consisting of a single (zero) element subject to ‘infinite’ upper and lower bounds. Further details can be found under the descriptions for iobj, a, ha, ka, bl and bu.

ncnln : int

The number of nonlinear constraints.

nonln : int

The number of nonlinear objective variables. If the objective function is nonlinear, the leading nonln columns of A belong to the nonlinear objective variables. (See also the description for njnln.)

njnln : int

The number of nonlinear Jacobian variables. If there are any nonlinear constraints, the leading njnln columns of A belong to the nonlinear Jacobian variables. If nonln > 0 and njnln > 0, the nonlinear objective and Jacobian variables overlap. The total number of nonlinear variables is max(nonln, njnln).

iobj : int

If iobj > ncnln, row iobj of A is a free row containing the nonzero elements of the linear part of the objective function.

If iobj = 0, there is no free row.

If iobj = -1, there is a dummy ‘free’ row.

a : float, array-like, shape (nnz)

The nonzero elements of the constraint matrix A, ordered by increasing column index. Since the constraint Jacobian (the Jacobian of the nonlinear constraint functions) must always appear in the top left-hand corner of A, those elements in a column associated with any nonlinear constraints must come before any elements belonging to the linear constraint matrix and the free row (if any; see iobj).

In general, a is partitioned into a nonlinear part and a linear part corresponding to the nonlinear variables and linear variables in the problem.

Elements in the nonlinear part may be set to any value (e.g., zero) because they are initialized at the first point that satisfies the linear constraints and the upper and lower bounds.

If the option ‘Derivative Level’ is 2 or 3, the nonlinear part may also be used to store any constant Jacobian elements.

Note that if confun does not define a constant Jacobian element, the missing value will be obtained directly from the corresponding element of a.

If the option ‘Derivative Level’ is 0 or 1, unassigned elements of fjac are not treated as constant; they are estimated by finite differences, at nontrivial expense.

The linear part must contain the nonzero elements of the linear constraint matrix and the free row (if any).

If iobj = -1 (a dummy ‘free’ row), set the single element of that row to zero.

Elements with the same row and column indices are not allowed. (See also the descriptions for ha and ka.)

ha : int, array-like, shape (nnz)

ha[i-1] must contain the row index of the nonzero element stored in a[i-1], for i = 1, 2, ..., nnz. The row indices for a column may be supplied in any order, subject to the condition that those elements in a column associated with any nonlinear constraints must appear before those elements associated with any linear constraints (including the free row, if any). Note that confun must define the Jacobian elements in the same order. If iobj = -1, set the single row index to 1.

ka : int, array-like, shape (n+1)

ka[j-1] must contain the index in a of the start of the jth column, for j = 1, 2, ..., n, where positions in a are numbered from 1. To specify the jth column as empty, set ka[j-1] = ka[j]. Note that the first and last elements of ka must satisfy ka[0] = 1 and ka[n] = nnz + 1. If iobj = -1, set ka[0] = 1 and set all remaining elements to 2.
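As an illustration of this storage scheme, here is a small hedged helper (not part of naginterfaces) that builds a, ha and ka from a dense NumPy matrix. Because the rows of the dense matrix are assumed to be ordered with any nonlinear constraints first, a simple top-to-bottom scan of each column satisfies the ordering requirement described above:

    import numpy as np

    def dense_to_columnwise(dense):
        """Return (a, ha, ka) in the 1-based, column-ordered storage
        described above, dropping zero elements."""
        nrows, ncols = dense.shape
        a, ha, ka = [], [], [1]
        for j in range(ncols):
            for i in range(nrows):
                if dense[i, j] != 0.0:
                    a.append(float(dense[i, j]))
                    ha.append(i + 1)      # 1-based row index
            ka.append(len(a) + 1)         # start of the next column
        return a, ha, ka

    # For example, a single row [1, 0, 2] gives:
    # a = [1.0, 2.0], ha = [1, 1], ka = [1, 2, 2, 3]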

bl : float, array-like, shape (n+m)

The lower bounds for all the variables and general constraints, in the following order. The first n elements of bl must contain the bounds on the variables x, the next ncnln elements the bounds for the nonlinear constraints (if any) and the next (m - ncnln) elements the bounds for the linear constraints and the free row (if any). To specify a nonexistent lower bound (i.e., a bound of minus infinity), set the element no larger than -bigbnd, where bigbnd is the value of the option ‘Infinite Bound Size’. To specify the jth constraint as an equality, set the corresponding elements of bl and bu to the same value beta, say, where |beta| < bigbnd. If iobj = -1, set the lower bound for the dummy free row to -bigbnd.

bu : float, array-like, shape (n+m)

The upper bounds for all the variables and general constraints, in the following order. The first n elements of bu must contain the bounds on the variables x, the next ncnln elements the bounds for the nonlinear constraints (if any) and the next (m - ncnln) elements the bounds for the linear constraints and the free row (if any). To specify a nonexistent upper bound (i.e., a bound of plus infinity), set the element no smaller than bigbnd. To specify the jth constraint as an equality, set the corresponding elements of bl and bu to the same value beta, say, where |beta| < bigbnd. If iobj = -1, set the upper bound for the dummy free row to bigbnd.
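For concreteness, a hedged sketch of this bound ordering for a problem with n = 2 variables, one nonlinear constraint and one linear row (all numbers illustrative):

    bigbnd = 1e25   # anything of this size is treated as 'infinite'

    # Order: n variable bounds, then ncnln nonlinear rows, then m - ncnln linear rows.
    bl = [0.0, 0.0] + [-bigbnd] + [1.0]
    bu = [bigbnd, bigbnd] + [4.0] + [1.0]   # bl[3] == bu[3]: the linear row is an equality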

start : str, length 1

Indicates how a starting basis is to be obtained.

If start = 'C', an internal Crash procedure will be used to choose an initial basis.

If start = 'W', a basis is already defined in istate and ns (probably from a previous call).

names : str, length 8, array-like, shape (nname)

Specifies the column and row names to be used in the printed output.

If nname = 1, names is not referenced and the printed output will use default names for the columns and rows.

If nname = n + m, the first n elements must contain the names for the columns, the next ncnln elements must contain the names for the nonlinear rows (if any) and the next (m - ncnln) elements must contain the names for the linear rows (if any) to be used in the printed output.

Note that the name for the free row or dummy ‘free’ row must be stored in the element of names corresponding to that row.

ns : int

The number of superbasic variables. It need not be specified if start = 'C', but must retain its value from a previous call when start = 'W'.

xs : float, array-like, shape (n+m)

The initial values of the variables and slacks (x, s). (See the description for istate.)

istate : int, array-like, shape (n+m)

If start = 'C', the first n elements of istate and xs must specify the initial states and values, respectively, of the variables x. (The slacks s need not be initialized.) An internal Crash procedure is then used to select an initial basis matrix B. The initial basis matrix will be triangular (neglecting certain small elements in each column). It is chosen from various rows and columns of the constraint matrix. Possible values for istate[j-1] are as follows:

istate[j-1]    State of xs[j-1] during Crash procedure

0 or 1         Eligible for the basis

2              Ignored

3              Eligible for the basis (given preference over 0 or 1)

4 or 5         Ignored

If nothing special is known about the problem, or there is no wish to provide special information, you may set istate[j-1] = 0 and xs[j-1] = 0.0, for j = 1, 2, ..., n.

All variables will then be eligible for the initial basis.

Less trivially, to say that the jth variable will probably be equal to one of its bounds, set istate[j-1] = 4 and xs[j-1] = bl[j-1], or istate[j-1] = 5 and xs[j-1] = bu[j-1], as appropriate.

Following the Crash procedure, variables for which istate[j-1] = 2 are made superbasic.

Other variables not selected for the basis are then made nonbasic at the value xs[j-1] if bl[j-1] <= xs[j-1] <= bu[j-1], or at the value bl[j-1] or bu[j-1] closest to xs[j-1].

If start = 'W', istate and xs must specify the initial states and values, respectively, of the variables and slacks (x, s).

If the function has been called previously with the same values of n and m, istate already contains satisfactory information.

clamda : float, array-like, shape (n+m)

If ncnln > 0, the elements of clamda corresponding to the nonlinear constraints (elements n+1 to n+ncnln) must contain Lagrange multiplier estimates for those constraints. If nothing special is known about the problem, or there is no wish to provide special information, you may set these elements to 0.0. The remaining elements need not be set.

comm : dict, communication object, modified in place

Communication structure.

This argument must have been initialized by a prior call to nlp1_init().

confun : None or callable (mode, f, fjac) = confun(mode, ncnln, x, fjac, nstate, data=None), optional

Note: if this argument is None then a NAG-supplied facility will be used.

confun must calculate the vector of nonlinear constraint functions F(x) and (optionally) its Jacobian for a specified njnln-element vector x.

If there are no nonlinear constraints (i.e., ncnln = 0), confun will never be called by nlp1_sparse_solve and may be None. If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.

Parameters
mode : int

Indicates which values must be assigned during each call of confun. Only the following values need be assigned:

mode = 0: f.

mode = 1: all available elements of fjac.

mode = 2: f and all available elements of fjac.

ncnln : int

The number of nonlinear constraints. These must be the first ncnln constraints in the problem.

x : float, ndarray, shape (njnln)

x, the vector of nonlinear Jacobian variables at which the nonlinear constraint functions and/or the available elements of the constraint Jacobian are to be evaluated.

fjac : float, ndarray, shape (ncnln, njnln)

The elements of fjac are set to special values which enable nlp1_sparse_solve to detect whether they are changed by confun.

nstate : int

If nstate = 1, then nlp1_sparse_solve is calling confun for the first time. This argument setting allows you to save computation time if certain data must be read or calculated only once.

If nstate >= 2, nlp1_sparse_solve is calling confun for the last time.

This argument setting allows you to perform some additional computation on the final solution.

In general, the last call to confun is made with nstate >= 2 (see Exceptions).

Otherwise, nstate = 0.

data : arbitrary, optional, modifiable in place

User-communication data for callback functions.

Returns
mode : int

You may set mode to a negative value as follows:

mode <= -2: the solution to the current problem is terminated and in this case nlp1_sparse_solve will terminate with the error status set to mode.

mode = -1: the nonlinear constraint functions cannot be calculated at the current x. nlp1_sparse_solve will then terminate with errno = -1 unless this occurs during the linesearch; in this case, the linesearch will shorten the step and try again.

f : float, array-like, shape (ncnln)

If mode = 0 or 2, f[i-1] must contain the value of the ith nonlinear constraint function at x.

fjac : float, array-like, shape (ncnln, njnln)

If mode = 1 or 2, fjac must return the available elements of the constraint Jacobian evaluated at x. These elements must be stored in exactly the same positions as implied by the definitions of the arrays a, ha and ka. If the option ‘Derivative Level’ is 2 or 3, the value of any constant Jacobian element not defined by confun will be obtained directly from a. Note that the function does not perform any internal checks for consistency (except indirectly via the option ‘Verify Level’), so great care is essential.
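A hedged sketch of a confun for a single nonlinear constraint x1^2 + x2^2 (so ncnln = 1 and njnln = 2). The constraint itself, the 2D indexing of fjac (following the shape shown above) and the use of nstate for one-time setup are illustrative:

    def confun(mode, ncnln, x, fjac, nstate, data=None):
        # One nonlinear constraint: f_1(x) = x1**2 + x2**2.
        if nstate == 1:
            pass  # first call: read or precompute one-off data here
        f = [0.0] * ncnln
        if mode in (0, 2):
            f[0] = x[0]**2 + x[1]**2
        if mode in (1, 2):
            # Jacobian elements, in the positions implied by a, ha and ka.
            fjac[0, 0] = 2.0 * x[0]
            fjac[0, 1] = 2.0 * x[1]
        return mode, f, fjac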

objfun : None or callable (mode, objf, objgrd) = objfun(mode, x, objgrd, nstate, data=None), optional

Note: if this argument is None then a NAG-supplied facility will be used.

objfun must calculate the nonlinear part of the objective function and (optionally) its gradient for a specified nonln-element vector x.

If there are no nonlinear objective variables (i.e., nonln = 0), objfun will never be called by nlp1_sparse_solve and may be None.

Parameters
mode : int

Indicates which values must be assigned during each call of objfun. Only the following values need be assigned:

mode = 0: objf.

mode = 1: all available elements of objgrd.

mode = 2: objf and all available elements of objgrd.

x : float, ndarray, shape (nonln)

x, the vector of nonlinear variables at which the nonlinear part of the objective function and/or all available elements of its gradient are to be evaluated.

objgrd : float, ndarray, shape (nonln)

The elements of objgrd are set to special values which enable nlp1_sparse_solve to detect whether they are changed by objfun.

nstate : int

If nstate = 1, nlp1_sparse_solve is calling objfun for the first time. This argument setting allows you to save computation time if certain data must be read or calculated only once.

If nstate >= 2, nlp1_sparse_solve is calling objfun for the last time.

This argument setting allows you to perform some additional computation on the final solution.

In general, the last call to objfun is made with nstate >= 2 (see Exceptions).

Otherwise, nstate = 0.

data : arbitrary, optional, modifiable in place

User-communication data for callback functions.

Returns
mode : int

You may set mode to a negative value as follows:

mode <= -2: the solution to the current problem is terminated and in this case nlp1_sparse_solve will terminate with the error status set to mode.

mode = -1: the nonlinear part of the objective function cannot be calculated at the current x. nlp1_sparse_solve will then terminate with errno = -1 unless this occurs during the linesearch; in this case, the linesearch will shorten the step and try again.

objf : float

If mode = 0 or 2, objf must be set to the value of the nonlinear part of the objective function at x.

objgrd : float, array-like, shape (nonln)

If mode = 1 or 2, objgrd must return the available elements of the gradient evaluated at x.
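A hedged sketch of an objfun that honours the mode protocol, computing the gradient only when it is requested (which is what makes a ‘Nonderivative Linesearch’ cheap; see Other Parameters). The objective itself is illustrative:

    import math

    def objfun(mode, x, objgrd, nstate, data=None):
        # Nonlinear part of the objective: exp(x1) * (x1 + x2**2).
        objf = 0.0
        e = math.exp(x[0])
        if mode in (0, 2):
            objf = e * (x[0] + x[1]**2)
        if mode in (1, 2):            # gradients only when asked for
            objgrd[0] = e * (x[0] + x[1]**2 + 1.0)
            objgrd[1] = e * 2.0 * x[1]
        return mode, objf, objgrd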

leniz : None or int, optional

Note: if this argument is None then a default value will be used.

The dimension of the internal workspace array iz.

lenz : int, optional

The dimension of the internal workspace array z.

data : arbitrary, optional

User-communication data for callback functions.

io_manager : FileObjManager, optional

Manager for I/O in this routine.

Returns
a : float, ndarray, shape (nnz)

Elements in the nonlinear part corresponding to nonlinear Jacobian variables are overwritten.

ns : int

The final number of superbasics.

xs : float, ndarray, shape (n+m)

The final values of the variables and slacks (x, s).

istate : int, ndarray, shape (n+m)

The final states of the variables and slacks (x, s). The significance of each possible value of istate[j-1] is as follows:

istate[j-1]    State of variable j    Normal value of xs[j-1]

0              Nonbasic               bl[j-1]

1              Nonbasic               bu[j-1]

2              Superbasic             Between bl[j-1] and bu[j-1]

3              Basic                  Between bl[j-1] and bu[j-1]

If ninf = 0, basic and superbasic variables may be outside their bounds by as much as the value of the option ‘Minor Feasibility Tolerance’.

Note that if scaling is specified, the option ‘Minor Feasibility Tolerance’ applies to the variables of the scaled problem.

In this case, the variables of the original problem may be slightly outside their bounds, but this is unlikely unless the problem is very badly scaled.

Very occasionally some nonbasic variables may be outside their bounds by as much as the option ‘Minor Feasibility Tolerance’, and there may be some nonbasic variables for which xs[j-1] lies strictly between its bounds.

If ninf > 0, some basic and superbasic variables may be outside their bounds by an arbitrary amount (bounded by sinf if scaling was not used).
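To examine the final states, a hedged sketch (the attribute names on the returned object are assumed to mirror the entries in this Returns section):

    # res is the object returned by opt.nlp1_sparse_solve in the sketch
    # near the top of this page.
    for j, (state, val) in enumerate(zip(res.istate, res.xs), start=1):
        print('variable/slack', j, 'state', state, 'value', val)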

clamda : float, ndarray, shape (n+m)

A set of Lagrange multipliers for the bounds on the variables (reduced costs) and the general constraints (shadow costs). More precisely, the first n elements contain the multipliers for the bounds on the variables, the next ncnln elements contain the multipliers for the nonlinear constraints (if any) and the next (m - ncnln) elements contain the multipliers for the linear constraints and the free row (if any).

miniz : int

The minimum value of leniz required to start solving the problem. If errno = 12, nlp1_sparse_solve may be called again with leniz suitably larger than miniz. (The bigger the better, since it is not certain how much workspace the basis factors need.)

minz : int

The minimum value of lenz required to start solving the problem. If errno = 13, nlp1_sparse_solve may be called again with lenz suitably larger than minz. (The bigger the better, since it is not certain how much workspace the basis factors need.)
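A hedged sketch of a retry loop for these workspace exits, reusing the data from the sketch near the top of this page. The exception class, its errno attribute and the enlargement factor are assumptions based on the general naginterfaces error scheme (errno 12 and 13 correspond to leniz and lenz being too small, as described above):

    from naginterfaces.base import utils

    leniz, lenz = None, 500
    for attempt in range(3):
        try:
            res = opt.nlp1_sparse_solve(
                m, ncnln, nonln, njnln, iobj, a, ha, ka, bl, bu, 'C',
                names, ns, xs, istate, clamda, comm, objfun=objfun,
                leniz=leniz, lenz=lenz,
            )
            break
        except utils.NagValueError as exc:
            if exc.errno not in (12, 13):
                raise
            # Workspace too small: enlarge both estimates and try again.
            leniz = 4 * (leniz or 500)
            lenz = 4 * lenz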

ninf : int

The number of constraints that lie outside their bounds by more than the value of the option ‘Minor Feasibility Tolerance’.

If the linear constraints are infeasible, the sum of the infeasibilities of the linear constraints is minimized subject to the upper and lower bounds being satisfied.

In this case, ninf contains the number of elements of the linear constraints that lie outside their upper or lower bounds.

Note that the nonlinear constraints are not evaluated.

Otherwise, the sum of the infeasibilities of the nonlinear constraints is minimized subject to the linear constraints and the upper and lower bounds being satisfied.

In this case, ninf contains the number of elements of the nonlinear constraints that lie outside their upper or lower bounds.

sinf : float

The sum of the infeasibilities of constraints that lie outside their bounds by more than the value of the option ‘Minor Feasibility Tolerance’.

obj : float

The value of the objective function.

Other Parameters
‘Central Difference Interval’ : float

Default

Note that this option does not apply when .

The value of is used near an optimal solution in order to obtain more accurate (but more expensive) estimates of gradients. This requires twice as many function evaluations as compared to using forward differences (see option ‘Forward Difference Interval’). The interval used for the th variable is . The resulting gradient estimates should be accurate to , unless the functions are badly scaled. The switch to central differences is indicated by c at the end of each line of intermediate printout produced by the major iterations (see Major Iteration Printout). See Gill et al. (1981) for a discussion of the accuracy in finite difference approximations.

If , the default value is used.

‘Check Frequency’ : int

Default

Every kth minor iteration after the most recent basis factorization (where k is the value of this option), a numerical test is made to see if the current solution satisfies the general linear constraints (including any linearized nonlinear constraints). The constraints are of the form Ax - s = 0, where s is the set of slack variables. If the largest element of the residual vector is judged to be too large, the current basis is refactorized and the basic variables recomputed to satisfy the general constraints more accurately.

If , the default value is used. If , the value is used and effectively no checks are made.

‘Crash Option’ : int

Default

The default value is 0 if there are any nonlinear constraints and 3 otherwise. Note that this option does not apply when start = 'W' (see Parameters).

If start = 'C', an internal Crash procedure is used to select an initial basis from various rows and columns of the constraint matrix. The value of ‘Crash Option’ determines which rows and columns are initially eligible for the basis and how many times the Crash procedure is called. Columns of the slack portion are used to pad the basis where necessary. The possible choices are the following:

0 – The initial basis contains only slack variables: B = I.

1 – The Crash procedure is called once (looking for a triangular basis in all rows and columns of the constraint matrix).

2 – The Crash procedure is called twice (if there are any nonlinear constraints). The first call looks for a triangular basis in linear rows, and the iteration proceeds with simplex iterations until the linear constraints are satisfied. The Jacobian is then evaluated for the first major iteration and the Crash procedure is called again to find a triangular basis in the nonlinear rows (whilst retaining the current basis for linear rows).

3 – The Crash procedure is called up to three times (if there are any nonlinear constraints). The first two calls treat linear equality constraints and linear inequality constraints separately. The Jacobian is then evaluated for the first major iteration and the Crash procedure is called again to find a triangular basis in the nonlinear rows (whilst retaining the current basis for linear rows).

If the supplied value lies outside the range [0, 3], the default value is used.

If ‘Crash Option’ > 0, certain slacks on inequality rows are selected for the basis first. (If ‘Crash Option’ >= 2, numerical values are used to exclude slacks that are close to a bound.) The Crash procedure then makes several passes through the columns of the constraint matrix, searching for a basis matrix that is essentially triangular. A column is assigned to ‘pivot’ on a particular row if the column contains a suitably large element in a row that has not yet been assigned. (The pivot elements ultimately form the diagonals of the triangular basis.) For remaining unassigned rows, slack variables are inserted to complete the basis.

‘Crash Tolerance’ : float

Default

The value r (0 <= r < 1) allows the Crash procedure to ignore certain ‘small’ nonzero elements in the columns of the constraint matrix while searching for a triangular basis. If a_max is the largest element in the jth column, other nonzeros in that column are ignored if they are no larger than a_max × r.

When r > 0, the basis obtained by the Crash procedure may not be strictly triangular, but it is likely to be nonsingular and almost triangular. The intention is to obtain a starting basis containing more columns of the constraint matrix and fewer (arbitrary) slacks. A feasible solution may be reached earlier on some problems.

If r < 0 or r >= 1, the default value is used.

‘Defaults’ : valueless

This special keyword may be used to reset all options to their default values.
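Options are supplied as strings through the companion option-setting functions named at the top of this page; a hedged sketch (the call signature is assumed to be (string, comm), and the option values shown are illustrative):

    opt.nlp1_sparse_option_string('Major Iteration Limit = 50', comm)
    opt.nlp1_sparse_option_string('Minor Feasibility Tolerance = 1.0e-8', comm)
    opt.nlp1_sparse_option_string('Defaults', comm)   # reset every option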

‘Derivative Level’ : int

Default

This argument indicates which nonlinear function gradients are provided in functions objfun and confun. The possible choices are the following:

3 – All elements of the objective gradient and the constraint Jacobian are provided.

2 – All elements of the constraint Jacobian are provided, but some (or all) elements of the objective gradient are not specified.

1 – All elements of the objective gradient are provided, but some (or all) elements of the constraint Jacobian are not specified.

0 – Some (or all) elements of both the objective gradient and the constraint Jacobian are not specified.

The default value ‘Derivative Level’ = 3 should be used whenever possible. It is the most reliable and will usually be the most efficient.

If ‘Derivative Level’ = 0 or 2, nlp1_sparse_solve will estimate the unspecified elements of the objective gradient, using finite differences. This may simplify the coding of objfun. However, the computation of finite difference approximations usually increases the total run-time substantially (since a call to objfun is required for each unspecified element) and there is less assurance that an acceptable solution will be found.

If ‘Derivative Level’ = 0 or 1, nlp1_sparse_solve will approximate unspecified elements of the constraint Jacobian. One call to confun is needed for each column of the Jacobian that contains an unspecified element; that single call estimates all unspecified elements in the column. Columns whose elements are all provided require no extra calls to confun.

At times, central differences are used rather than forward differences, in which case twice as many calls to objfun and confun are needed. (The switch to central differences is not under your control.)

If the supplied value lies outside the range [0, 3], the default value is used.

‘Derivative Linesearch’ : valueless

Default

At each major iteration, a linesearch is used to improve the value of the Lagrangian merit function. The default linesearch uses safeguarded cubic interpolation and requires both function and gradient values in order to compute estimates of the step length. If some analytic derivatives are not provided or option ‘Nonderivative Linesearch’ is specified, a linesearch based upon safeguarded quadratic interpolation (which does not require the evaluation or approximation of any gradients) is used instead.

A nonderivative linesearch can be slightly less robust on difficult problems and it is recommended that the default be used if the functions and their derivatives can be computed at approximately the same cost. If the gradients are very expensive to compute relative to the functions however, a nonderivative linesearch may result in a significant decrease in the total run-time.

If option ‘Nonderivative Linesearch’ is selected, nlp1_sparse_solve signals the evaluation of the linesearch by calling objfun and confun with mode = 0. Once the linesearch is complete, the nonlinear functions are re-evaluated with mode = 2. If the potential savings offered by a nonderivative linesearch are to be fully realised, it is essential that objfun and confun be coded so that no derivatives are computed when mode = 0.

‘Nonderivative Linesearch’ : valueless

At each major iteration, a linesearch is used to improve the value of the Lagrangian merit function. The default linesearch uses safeguarded cubic interpolation and requires both function and gradient values in order to compute estimates of the step length. If some analytic derivatives are not provided or option ‘Nonderivative Linesearch’ is specified, a linesearch based upon safeguarded quadratic interpolation (which does not require the evaluation or approximation of any gradients) is used instead.

A nonderivative linesearch can be slightly less robust on difficult problems and it is recommended that the default be used if the functions and their derivatives can be computed at approximately the same cost. If the gradients are very expensive to compute relative to the functions however, a nonderivative linesearch may result in a significant decrease in the total run-time.

If option ‘Nonderivative Linesearch’ is selected, nlp1_sparse_solve signals the evaluation of the linesearch by calling objfun and confun with mode = 0. Once the linesearch is complete, the nonlinear functions are re-evaluated with mode = 2. If the potential savings offered by a nonderivative linesearch are to be fully realised, it is essential that objfun and confun be coded so that no derivatives are computed when mode = 0.

‘Elastic Weight’ : float

Default or

The default value of is if there are any nonlinear constraints and otherwise.

This option defines the initial weight associated with the elastic problem.

At any given major iteration , elastic mode is entered if the QP subproblem is infeasible or the QP dual variables (Lagrange multipliers) are larger in magnitude than , where is the objective gradient. In either case, the QP subproblem is resolved in elastic mode with .

Thereafter, the weight is increased (subject to a maximum allowable value) at any point that is optimal for the elastic problem but not feasible for the original problem.

If , the default value is used.

‘Expand Frequency’ : int

Default

This option is part of the EXPAND anti-cycling procedure due to Gill et al. (1989), which is designed to make progress even on highly degenerate problems.

For linear models, the strategy is to force a positive step at every iteration, at the expense of violating the constraints by a small amount. Suppose that the value of option ‘Minor Feasibility Tolerance’ is . Over a period of iterations, the feasibility tolerance actually used by nlp1_sparse_solve (i.e., the working feasibility tolerance) increases from to (in steps of ).

For nonlinear models, the same procedure is used for iterations in which there is only one superbasic variable. (Cycling can only occur when the current solution is at a vertex of the feasible region.) Thus, zero steps are allowed if there is more than one superbasic variable, but otherwise positive steps are enforced.

Increasing the value of ‘Expand Frequency’ helps reduce the number of slightly infeasible nonbasic variables (most of which are eliminated during the resetting procedure). However, it also diminishes the freedom to choose a large pivot element (see option ‘Pivot Tolerance’).

If , the default value is used. If , the value is used and effectively no anti-cycling procedure is invoked.

‘Factorization Frequency’ : int

Default

The default value of is if there are any nonlinear constraints and otherwise.

If the value is positive, at most that number of basis changes will occur between factorizations of the basis matrix.

For linear problems, the basis factors are usually updated at every iteration. The default value is reasonable for typical problems, particularly those that are extremely sparse and well-scaled.

When the objective function is nonlinear, fewer basis updates will occur as the solution is approached. The number of iterations between basis factorizations will, therefore, increase. During these iterations a test is made regularly according to the value of option ‘Check Frequency’ to ensure that the general constraints are satisfied. If necessary, the basis will be refactorized before the limit of updates is reached.

If , the default value is used.

‘Infeasible Exit’ : valueless

Default

Note that this option is ignored if the value of option ‘Major Iteration Limit’ is exceeded, or the linear constraints are infeasible.

If termination is about to occur at a point that does not satisfy the nonlinear constraints and option ‘Feasible Exit’ is selected, this option requests that additional iterations be performed in order to find a feasible point (if any) for the nonlinear constraints. This involves solving a feasible point problem in which the objective function is omitted.

Otherwise, this option requests no additional iterations be performed.

‘Feasible Exit’ : valueless

Note that this option is ignored if the value of option ‘Major Iteration Limit’ is exceeded, or the linear constraints are infeasible.

If termination is about to occur at a point that does not satisfy the nonlinear constraints and option ‘Feasible Exit’ is selected, this option requests that additional iterations be performed in order to find a feasible point (if any) for the nonlinear constraints. This involves solving a feasible point problem in which the objective function is omitted.

Otherwise, this option requests no additional iterations be performed.

‘Minimize’ : valueless

Default

If option ‘Feasible Point’ is selected, this option attempts to find a feasible point (if any) for the nonlinear constraints by omitting the objective function. It can also be used to check whether the nonlinear constraints are feasible.

Otherwise, this option specifies the required direction of the optimization. It applies to both linear and nonlinear terms (if any) in the objective function. Note that if two problems are the same except that one minimizes and the other maximizes , their solutions will be the same but the signs of the dual variables and the reduced gradients will be reversed.

‘Maximize’ : valueless

If option ‘Feasible Point’ is selected, this option attempts to find a feasible point (if any) for the nonlinear constraints by omitting the objective function. It can also be used to check whether the nonlinear constraints are feasible.

Otherwise, this option specifies the required direction of the optimization. It applies to both linear and nonlinear terms (if any) in the objective function. Note that if two problems are the same except that one minimizes and the other maximizes , their solutions will be the same but the signs of the dual variables and the reduced gradients will be reversed.

‘Feasible Point’ : valueless

If option ‘Feasible Point’ is selected, this option attempts to find a feasible point (if any) for the nonlinear constraints by omitting the objective function. It can also be used to check whether the nonlinear constraints are feasible.

Otherwise, this option specifies the required direction of the optimization. It applies to both linear and nonlinear terms (if any) in the objective function. Note that if two problems are the same except that one minimizes and the other maximizes , their solutions will be the same but the signs of the dual variables and the reduced gradients will be reversed.

‘Forward Difference Interval’ : float

Default

This option defines an interval used to estimate derivatives by forward differences in the following circumstances:

  1. For verifying the objective and/or constraint gradients (see the description of the option ‘Verify Level’).

  2. For estimating unspecified elements of the objective gradient and/or the constraint Jacobian.

A derivative with respect to a given variable is estimated by perturbing that element of x and then evaluating objfun and/or confun (as appropriate) at the perturbed point. The resulting gradient estimates should be accurate to roughly the square root of the function precision, unless the functions are badly scaled. Judicious alteration of the interval may sometimes lead to greater accuracy. See Gill et al. (1981) for a discussion of the accuracy in finite difference approximations.

If , the default value is used.

‘Function Precision’ : float

Default

This argument defines the relative function precision , which is intended to be a measure of the relative accuracy with which the nonlinear functions can be computed. For example, if (or ) is computed as for some relevant and the first significant digits are known to be correct, then the appropriate value for would be .

Ideally the functions or should have magnitude of order . If all functions are substantially less than in magnitude, should be the absolute precision. For example, if (or ) is computed as for some relevant and the first significant digits are known to be correct, then the appropriate value for would be .

The choice of can be quite complicated for badly scaled problems; see Module 8 of Gill et al. (1981) for a discussion of scaling techniques. The default value is appropriate for most simple functions that are computed with full accuracy.

In some cases the function values will be the result of extensive computation, possibly involving an iterative procedure that can provide few digits of precision at reasonable cost. Specifying an appropriate value of may, therefore, lead to savings, by allowing the linesearch procedure to terminate when the difference between function values along the search direction becomes as small as the absolute error in the values.

If or , the default value is used.

‘Hessian Frequency’ : int

Default

This option forces the approximate Hessian formed from BFGS updates to be reset to the identity matrix upon completion of a major iteration. It is intended to be used in conjunction with option ‘Hessian Full Memory’.

If , the default value is used and effectively no resets occur.

‘Hessian Full Memory’ : valueless

Default when the total number of nonlinear variables is small

These options specify the method for storing and updating the quasi-Newton approximation to the Hessian of the Lagrangian function.

If ‘Hessian Full Memory’ is specified, the approximate Hessian is treated as a dense matrix and BFGS quasi-Newton updates are applied explicitly. This is most efficient when the total number of nonlinear variables is not too large. In this case, the storage requirement is fixed and you can expect one-step Q-superlinear convergence to the solution.

‘Hessian Limited Memory’ should only be specified when the total number of nonlinear variables is very large. In this case a limited memory procedure is used to update a diagonal Hessian approximation a limited number of times. (Updates are accumulated as a list of vector pairs. They are discarded at regular intervals after the Hessian has been reset to its diagonal.)

Note that if ‘Hessian Frequency’ = 20 is used in conjunction with ‘Hessian Full Memory’, the effect will be similar to using ‘Hessian Limited Memory’ in conjunction with ‘Hessian Updates’ = 20, except that the latter will retain the current diagonal during resets.

‘Hessian Limited Memory’ : valueless

Default when the total number of nonlinear variables is large

These options specify the method for storing and updating the quasi-Newton approximation to the Hessian of the Lagrangian function.

If ‘Hessian Full Memory’ is specified, the approximate Hessian is treated as a dense matrix and BFGS quasi-Newton updates are applied explicitly. This is most efficient when the total number of nonlinear variables is not too large. In this case, the storage requirement is fixed and you can expect one-step Q-superlinear convergence to the solution.

‘Hessian Limited Memory’ should only be specified when the total number of nonlinear variables is very large. In this case a limited memory procedure is used to update a diagonal Hessian approximation a limited number of times. (Updates are accumulated as a list of vector pairs. They are discarded at regular intervals after the Hessian has been reset to its diagonal.)

Note that if ‘Hessian Frequency’ = 20 is used in conjunction with ‘Hessian Full Memory’, the effect will be similar to using ‘Hessian Limited Memory’ in conjunction with ‘Hessian Updates’ = 20, except that the latter will retain the current diagonal during resets.

‘Hessian Updates’ : int

Default or

The default value is 20 when ‘Hessian Limited Memory’ is in effect and is effectively infinite when ‘Hessian Full Memory’ is in effect, in which case no updates are performed.

If ‘Hessian Limited Memory’ is in effect, this option defines the maximum number of pairs of Hessian update vectors that are to be used to define the quasi-Newton approximate Hessian. Once the limit of updates is reached, all but the diagonal elements of the accumulated updates are discarded and the process starts again. Broadly speaking, the more updates that are stored, the better the quality of the approximate Hessian. On the other hand, the more vectors that are stored, the greater the cost of each QP iteration.

The default value is likely to give a robust algorithm without significant expense, but faster convergence may be obtained with far fewer updates.

If , the default value is used.

‘Infinite Bound Size’ : float

Default

If r > 0, r defines the ‘infinite’ bound bigbnd in the definition of the problem constraints. Any upper bound greater than or equal to bigbnd will be regarded as plus infinity (and similarly any lower bound less than or equal to -bigbnd will be regarded as minus infinity).

If r <= 0, the default value is used.

‘Iteration Limit’ : int

Default

This option specifies the maximum number of minor iterations allowed (i.e., iterations of the simplex method or the QP algorithm), summed over all major iterations. (See also the description of the options ‘Major Iteration Limit’ and ‘Minor Iteration Limit’.)

If , the default value is used.

‘Linesearch Tolerance’ : float

Default

This option controls the accuracy with which a step length will be located along the direction of search at each iteration. At the start of each linesearch a target directional derivative for the Lagrangian merit function is identified. The value of the tolerance, therefore, determines the accuracy to which this target value is approximated.

The default value requests an inaccurate search and is appropriate for most problems, particularly those with any nonlinear constraints.

If the nonlinear functions are cheap to evaluate, a more accurate search may be appropriate; try or . The number of major iterations required to solve the problem might decrease.

If the nonlinear functions are expensive to evaluate, a less accurate search may be appropriate. If , try . (The number of major iterations required to solve the problem might increase, but the total number of function evaluations may decrease enough to compensate.)

If , a moderately accurate search may be appropriate; try . Each search will (typically) require only function values, but many function calls will then be needed to estimate the missing gradients for the next iteration.

If or , the default value is used.

‘List’ : valueless

Option ‘List’ enables printing of each option specification as it is supplied. ‘Nolist’ suppresses this printing.

‘Nolist’ : valueless

Default

Option ‘List’ enables printing of each option specification as it is supplied. ‘Nolist’ suppresses this printing.

‘LU Density Tolerance’ : float

Default

The density tolerance r1 is used during the factorization of the basis matrix. Columns of L and rows of U are formed one at a time, and the remaining rows and columns of the basis are altered appropriately. At any stage, if the density of the remaining matrix exceeds r1, the Markowitz strategy for choosing pivots is terminated. The remaining matrix is then factorized using a dense procedure. Increasing the value of r1 towards unity may give slightly sparser factors, with a slight increase in factorization time. If r1 is not positive, the default value is used.

The singularity tolerance r2 is used to guard against ill-conditioned basis matrices. Whenever the basis is refactorized, the diagonal elements of U are tested: if any diagonal element is judged too small in absolute terms or relative to the largest element in its column, the corresponding column of the basis is replaced by the associated slack variable. This is most likely to occur when start = 'W' (see Parameters), or at the start of a major iteration. If r2 is not positive, the default value is used.

In some cases, the Jacobian matrix may converge to values that make the basis exactly singular (e.g., a whole row of the Jacobian matrix could be zero at an optimal solution). Before exact singularity occurs, the basis could become very ill-conditioned and the optimization could progress very slowly (if at all). Setting a slightly larger singularity tolerance may, therefore, help cause a judicious change of basis in such situations.

‘LU Singularity Tolerance’ : float

Default

The density tolerance r1 is used during the factorization of the basis matrix. Columns of L and rows of U are formed one at a time, and the remaining rows and columns of the basis are altered appropriately. At any stage, if the density of the remaining matrix exceeds r1, the Markowitz strategy for choosing pivots is terminated. The remaining matrix is then factorized using a dense procedure. Increasing the value of r1 towards unity may give slightly sparser factors, with a slight increase in factorization time. If r1 is not positive, the default value is used.

The singularity tolerance r2 is used to guard against ill-conditioned basis matrices. Whenever the basis is refactorized, the diagonal elements of U are tested: if any diagonal element is judged too small in absolute terms or relative to the largest element in its column, the corresponding column of the basis is replaced by the associated slack variable. This is most likely to occur when start = 'W' (see Parameters), or at the start of a major iteration. If r2 is not positive, the default value is used.

In some cases, the Jacobian matrix may converge to values that make the basis exactly singular (e.g., a whole row of the Jacobian matrix could be zero at an optimal solution). Before exact singularity occurs, the basis could become very ill-conditioned and the optimization could progress very slowly (if at all). Setting a slightly larger singularity tolerance may, therefore, help cause a judicious change of basis in such situations.

‘LU Factor Tolerance’ : float

Default or

The default value of is if there are any nonlinear constraints and otherwise. The default value of is if there are any nonlinear constraints and otherwise.

If the factor tolerance r1 >= 1 and the update tolerance r2 >= 1, the values of r1 and r2 affect the stability and sparsity of the basis factorization B = LU, during refactorization and updating, respectively. The lower triangular matrix L is a product of matrices of the form

( 1    )
( mu  1 )

where the multipliers mu satisfy |mu| <= r_i. Smaller values of r_i favour stability, while larger values favour sparsity. The default values usually strike a good compromise. For large and relatively dense problems, setting r1 or r2 to a smaller value (say, 10.0 or 5.0) may give a marked improvement in sparsity without impairing stability to a serious degree. Note that for problems involving band matrices, it may be necessary to reduce r1 and/or r2 in order to achieve stability.

If r1 < 1 or r2 < 1, the appropriate default value is used.

‘LU Update Tolerance’ : float

Default or

The default value of is if there are any nonlinear constraints and otherwise. The default value of is if there are any nonlinear constraints and otherwise.

If the factor tolerance r1 >= 1 and the update tolerance r2 >= 1, the values of r1 and r2 affect the stability and sparsity of the basis factorization B = LU, during refactorization and updating, respectively. The lower triangular matrix L is a product of matrices of the form

( 1    )
( mu  1 )

where the multipliers mu satisfy |mu| <= r_i. Smaller values of r_i favour stability, while larger values favour sparsity. The default values usually strike a good compromise. For large and relatively dense problems, setting r1 or r2 to a smaller value (say, 10.0 or 5.0) may give a marked improvement in sparsity without impairing stability to a serious degree. Note that for problems involving band matrices, it may be necessary to reduce r1 and/or r2 in order to achieve stability.

If r1 < 1 or r2 < 1, the appropriate default value is used.

‘Major Feasibility Tolerance’ : float

Default

This option specifies how accurately the nonlinear constraints should be satisfied. The default value is appropriate when the linear and nonlinear constraints contain data to approximately that accuracy. A larger value may be appropriate if some of the problem functions are known to be of low accuracy.

Let rowerr be the maximum nonlinear constraint violation normalized by the size of the solution. It is required to satisfy

rowerr = max_i ( viol_i / ||xs|| ) <= r,

where viol_i is the violation of the ith nonlinear constraint and r is the value of this option.

If , the default value is used.

‘Major Iteration Limit’ : int

Default

This option specifies the maximum number of major iterations allowed before termination. It is intended to guard against an excessive number of linearizations of the nonlinear constraints. Setting the limit to 0 and ‘Major Print Level’ > 0 means that the objective and constraint gradients will be checked if ‘Verify Level’ > 0, and the workspace needed to start solving the problem will be computed and printed, but no iterations will be performed.

If , the default value is used.

‘Major Optimality Tolerance’ : float

Default

This option specifies the final accuracy of the dual variables. If nlp1_sparse_solve terminates and no exception or warning is raised, a primal and dual solution will have been computed such that

where is an estimate of the complementarity gap for the th variable and is a measure of the size of the QP dual variables (or Lagrange multipliers) given by

It is included to make the tests independent of a scale factor on the objective function. Specifically, is computed from the final QP solution using the reduced gradients , where is the th element of the objective gradient and is the associated column of the constraint matrix :

If , the default value is used.

‘Optimality Tolerance’ : float

Default

This option specifies the final accuracy of the dual variables. If nlp1_sparse_solve terminates and no exception or warning is raised, a primal and dual solution will have been computed such that

where is an estimate of the complementarity gap for the th variable and is a measure of the size of the QP dual variables (or Lagrange multipliers) given by

It is included to make the tests independent of a scale factor on the objective function. Specifically, is computed from the final QP solution using the reduced gradients , where is the th element of the objective gradient and is the associated column of the constraint matrix :

If , the default value is used.

‘Major Print Level’ : int

Default

This option controls the amount of printout produced by the major iterations of nlp1_sparse_solve, as indicated below. A detailed description of the printed output is given in Major Iteration Printout (summary output at each major iteration and the final solution) and Monitoring Information (monitoring information at each major iteration). (See also the description of the option ‘Minor Print Level’.)

The following printout is sent to the file object associated with the advisory I/O unit (see FileObjManager):

Level    Output

0        No output.

1        The final solution only.

5        One line of summary output (up to 80 characters; see Major Iteration Printout) for each major iteration (no printout of the final solution).

>= 10    The final solution and one line of summary output for each major iteration.

The following printout is sent to the unit number given by the option ‘Monitoring File’:

Level    Output

0        No output.

1        The final solution only.

5        One long line of output (up to 120 characters; see Monitoring Information) for each major iteration (no printout of the final solution).

>= 10    The final solution and one long line of output for each major iteration.

>= 20    The final solution, one long line of output for each major iteration, matrix statistics (initial status of rows and columns, number of elements, density, biggest and smallest elements, etc.), details of the scale factors resulting from the scaling procedure (if ‘Scale Option’ = 1 or 2), basis factorization statistics and details of the initial basis resulting from the Crash procedure (if start = 'C'; see Parameters).

If summary output is requested and the unit number defined by the option ‘Monitoring File’ is the advisory unit number, the summary output for each major iteration is suppressed.

‘Print Level’ : int

This option controls the amount of printout produced by the major iterations of nlp1_sparse_solve, as indicated below. A detailed description of the printed output is given in Major Iteration Printout (summary output at each major iteration and the final solution) and Monitoring Information (monitoring information at each major iteration). (See also the description of the option ‘Minor Print Level’.)

The following printout is sent to the file object associated with the advisory I/O unit (see FileObjManager):

Level    Output

0        No output.

1        The final solution only.

5        One line of summary output (up to 80 characters; see Major Iteration Printout) for each major iteration (no printout of the final solution).

>= 10    The final solution and one line of summary output for each major iteration.

The following printout is sent to the unit number given by the option ‘Monitoring File’:

Level    Output

0        No output.

1        The final solution only.

5        One long line of output (up to 120 characters; see Monitoring Information) for each major iteration (no printout of the final solution).

>= 10    The final solution and one long line of output for each major iteration.

>= 20    The final solution, one long line of output for each major iteration, matrix statistics (initial status of rows and columns, number of elements, density, biggest and smallest elements, etc.), details of the scale factors resulting from the scaling procedure (if ‘Scale Option’ = 1 or 2), basis factorization statistics and details of the initial basis resulting from the Crash procedure (if start = 'C'; see Parameters).

If summary output is requested and the unit number defined by the option ‘Monitoring File’ is the advisory unit number, the summary output for each major iteration is suppressed.

‘Major Step Limit’ : float

Default

This option limits the change in x during a linesearch. It applies to all nonlinear problems once a ‘feasible solution’ or ‘feasible subproblem’ has been found.

A linesearch determines a step alpha in the interval (0, beta], where beta = 1 if there are any nonlinear constraints, or is the step to the nearest upper or lower bound on x if all the constraints are linear. Normally, the first step attempted is alpha = min(1, beta).

In some cases, such as f(x) = a*e^(bx) or f(x) = a*x^b, even a moderate change in the elements of x can lead to floating-point overflow. The argument r is, therefore, used to define a step limit beta_bar given by

beta_bar = r(1 + ||x||) / ||p||,

where p is the search direction, and the first evaluation of the problem functions is made at the (potentially) smaller step length alpha = min(1, beta_bar, beta).

Wherever possible, upper and lower bounds on x should be used to prevent evaluation of nonlinear functions at meaningless points. The default value should not affect progress on well-behaved functions, but values such as r = 0.1 or 0.01 may be helpful when rapidly varying functions are present. If a small value of r is selected, a ‘good’ starting point may be required. An important application is to the class of nonlinear least squares problems.

If , the default value is used.

‘Minor Feasibility Tolerance’ : float

Default

This option attempts to ensure that all variables eventually satisfy their upper and lower bounds to within the tolerance t. Since this includes slack variables, general linear constraints should also be satisfied to within t. Note that feasibility with respect to nonlinear constraints is judged by the value of option ‘Major Feasibility Tolerance’ and not by t.

If the bounds and linear constraints cannot be satisfied to within t, the problem is declared infeasible. Let Sinf be the corresponding sum of infeasibilities. If Sinf is quite small, it may be appropriate to raise t by a factor of 10 or 100. Otherwise, some error in the data should be suspected.

If ‘Scale Option’ >= 1, feasibility is defined in terms of the scaled problem (since it is more likely to be meaningful).

Nonlinear functions will only be evaluated at points that satisfy the bounds and linear constraints. If there are regions where a function is undefined, every effort should be made to eliminate these regions from the problem. For example, if f(x) = sqrt(x1) + log(x2), it is essential to place lower bounds on both variables. If the value t = 1.0e-6 is used, the bounds x1 >= 1.0e-5 and x2 >= 1.0e-4 might be appropriate. (The log singularity is more serious; in general, you should attempt to keep x as far away from singularities as possible.)

In reality, t is used as a feasibility tolerance for satisfying the bounds on x and s in each QP subproblem. If the sum of infeasibilities cannot be reduced to zero, the QP subproblem is declared infeasible and the function is then in elastic mode thereafter (with only the linearized nonlinear constraints defined to be elastic). (See also the description of ‘Elastic Weight’.)

If , the default value is used.

‘Feasibility Tolerance’ : float

Default

This option attempts to ensure that all variables eventually satisfy their upper and lower bounds to within the tolerance t. Since this includes slack variables, general linear constraints should also be satisfied to within t. Note that feasibility with respect to nonlinear constraints is judged by the value of option ‘Major Feasibility Tolerance’ and not by t.

If the bounds and linear constraints cannot be satisfied to within t, the problem is declared infeasible. Let Sinf be the corresponding sum of infeasibilities. If Sinf is quite small, it may be appropriate to raise t by a factor of 10 or 100. Otherwise, some error in the data should be suspected.

If ‘Scale Option’ >= 1, feasibility is defined in terms of the scaled problem (since it is more likely to be meaningful).

Nonlinear functions will only be evaluated at points that satisfy the bounds and linear constraints. If there are regions where a function is undefined, every effort should be made to eliminate these regions from the problem. For example, if f(x) = sqrt(x1) + log(x2), it is essential to place lower bounds on both variables. If the value t = 1.0e-6 is used, the bounds x1 >= 1.0e-5 and x2 >= 1.0e-4 might be appropriate. (The log singularity is more serious; in general, you should attempt to keep x as far away from singularities as possible.)

In reality, t is used as a feasibility tolerance for satisfying the bounds on x and s in each QP subproblem. If the sum of infeasibilities cannot be reduced to zero, the QP subproblem is declared infeasible and the function is then in elastic mode thereafter (with only the linearized nonlinear constraints defined to be elastic). (See also the description of ‘Elastic Weight’.)

If , the default value is used.

‘Minor Iteration Limit’ : int

Default

This option specifies the maximum number of iterations allowed between successive linearizations of the nonlinear constraints. A moderate value (say, in the range 10 to 50) prevents excessive effort being expended on early major iterations, but allows later QP subproblems to be solved to completion. Note that extra minor iterations are allowed if the first QP subproblem to be solved starts with the all-slack basis B = I. (See the description of the option ‘Crash Option’.)

In general, it is unsafe to specify values as small as 1 or 2 (because even when an optimal solution has been reached, a few minor iterations may be needed for the corresponding QP subproblem to be recognized as optimal).

If , the default value is used.

‘Minor Optimality Tolerance’ : float

Default

This option is used to judge optimality for each QP subproblem. Let the QP reduced gradients be d_j = g_j - pi^T a_j, where g_j is the jth element of the QP gradient, a_j is the associated column of the QP constraint matrix and pi is the set of QP dual variables.

By construction, the reduced gradients for basic variables are always zero. The QP subproblem will be declared optimal if the reduced gradients for nonbasic variables at their upper or lower bounds satisfy

d_j / ||pi|| <= r  or  d_j / ||pi|| >= -r,

respectively, and if |d_j| / ||pi|| <= r for superbasic variables.

Note that ||pi|| is a measure of the size of the dual variables. It is included to make the tests independent of a scale factor on the objective function. (The value of ||pi|| actually used is defined in the description for option ‘Major Optimality Tolerance’.)

If the objective is scaled down to be very small, the optimality test reduces to comparing d_j against r.

If , the default value is used.

‘Minor Print Level’ : int

Default

This option controls the amount of printout produced by the minor iterations of nlp1_sparse_solve (i.e., the iterations of the quadratic programming algorithm), as indicated below. A detailed description of the printed output is given in Minor Iteration Printout (summary output at each minor iteration) and Monitoring Information (monitoring information at each minor iteration). (See also the description of the option ‘Major Print Level’.)

The following printout is sent to the file object associated with the advisory I/O unit (see FileObjManager):

Level    Output

0        No output.

>= 1     One line of summary output (up to 80 characters; see Minor Iteration Printout) for each minor iteration.

The following printout is sent to the unit number given by the option ‘Monitoring File’:

Level    Output

0        No output.

>= 1     One long line of output (up to 120 characters; see Monitoring Information) for each minor iteration.

If summary output is requested and the unit number defined by the option ‘Monitoring File’ is the advisory unit number, the summary output for each minor iteration is suppressed.

‘Monitoring File’ : int

Default

If a valid unit number is specified and ‘Major Print Level’ or ‘Minor Print Level’ requests monitoring output, then the monitoring information produced by nlp1_sparse_solve at every iteration is sent to a file with that logical unit number. Otherwise, no monitoring information is produced.

‘Partial Price’ : int

Default or

The default value is 1 if there are any nonlinear constraints and 10 otherwise.

This option is recommended for large problems that have significantly more variables than constraints (i.e., n >> m). It reduces the work required for each ‘pricing’ operation (i.e., when a nonbasic variable is selected to become superbasic). The possible choices are the following:

1 – All columns of the constraint matrix are searched.

2 or more – The columns are partitioned into that number of roughly equal segments and the pricing search is carried out one segment at a time. If the previous pricing search was successful on a given segment, the next search begins on the following segment. If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to enter the basis. If nothing is found, the search continues on the next segments, and so on.

If , the default value is used.

‘Pivot Tolerance’ : float

Default

During the solution of QP subproblems, the pivot tolerance is used to prevent columns entering the basis if they would cause the basis to become almost singular.

When x changes to x + alpha*p for some specified search direction p, a ‘ratio test’ is used to determine which element of x reaches an upper or lower bound first. The corresponding element of p is called the pivot element. Elements of p are ignored (and, therefore, cannot be pivot elements) if they are smaller than the pivot tolerance.

It is common in practice for two (or more) variables to reach a bound at essentially the same time. In such cases, the ‘Minor Feasibility Tolerance’ provides some freedom to maximize the pivot element and thereby improve numerical stability. Excessively small values of ‘Minor Feasibility Tolerance’ should, therefore, not be specified. To a lesser extent, the ‘Expand Frequency’ also provides some freedom to maximize the pivot element. Excessively large values of ‘Expand Frequency’ should, therefore, not be specified.

If , the default value is used.
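The ratio test itself can be pictured in a few lines (illustrative only; the library's actual test also involves the EXPAND anti-cycling procedure controlled by ‘Expand Frequency’)::

    import numpy as np

    def ratio_test(x, p, bl, bu, pivot_tol):
        # Largest step alpha keeping bl <= x + alpha*p <= bu; elements of
        # p smaller than pivot_tol are ignored as pivot candidates.
        alpha, pivot = np.inf, None
        for j, pj in enumerate(p):
            if abs(pj) < pivot_tol:
                continue
            bound = bu[j] if pj > 0 else bl[j]
            step = (bound - x[j]) / pj
            if step < alpha:
                alpha, pivot = step, j
        return alpha, pivot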

‘Scale Option’int

Default or

The default value of is if there are any nonlinear constraints and otherwise.

This option enables you to scale the variables and constraints using an iterative procedure due to Fourer (1982), which attempts to compute row scales and column scales such that the scaled matrix coefficients are as close as possible to unity. (The lower and upper bounds on the variables and slacks for the scaled problem are redefined as and respectively, where if .) The possible choices for are the following.

Meaning

0

No scaling is performed. This is recommended if it is known that the elements of and the constraint matrix (along with its Jacobian) never become large (say, ).

1

All linear constraints and variables are scaled. This may improve the overall efficiency of the function on some problems.

2

All constraints and variables are scaled. Also, an additional scaling is performed that takes into account columns of that are fixed or have positive lower bounds or negative upper bounds.

If there are any nonlinear constraints present, the scale factors depend on the Jacobian at the first point that satisfies the linear constraints and the upper and lower bounds. The setting should, therefore, be used only if a ‘good’ starting point is available and the problem is not highly nonlinear.

If or , the default value is used.
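A single geometric-mean scaling pass in the spirit of Fourer (1982) might look as follows (an invented helper for row scales only, ignoring zeros; the library's procedure also computes column scales and iterates)::

    import numpy as np

    def row_scaling_pass(A):
        # Divide each row by sqrt(min|a_ij| * max|a_ij|) over its nonzeros,
        # pushing the scaled coefficients towards unity.
        A = np.abs(np.asarray(A, dtype=float))
        r = np.ones(A.shape[0])
        for i, row in enumerate(A):
            nz = row[row > 0.0]
            if nz.size:
                r[i] = 1.0 / np.sqrt(nz.min() * nz.max())
        return r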

‘Scale Tolerance’float

Default

Note that this option does not apply when .

The value () is used to control the number of scaling passes to be made through the constraint matrix . At least (and at most ) passes will be made. More precisely, let denote the largest column ratio (i.e., in some sense) after the th scaling pass through . The scaling procedure is terminated if for some . Thus, increasing the value of from to (say) will probably increase the number of passes through .

If or , the default value is used.

‘Start Objective Check At Column’int

Default

‘Stop Objective Check At Column’int

Default

‘Start Constraint Check At Column’int

Default

‘Stop Constraint Check At Column’int

Default

These keywords take effect only if . They may be used to control the verification of gradient elements computed by and/or Jacobian elements computed by . For example, if the first elements of the objective gradient appeared to be correct in an earlier run, so that only element remains questionable, then it is reasonable to specify . Similarly for columns of the Jacobian. If the first variables occur nonlinearly in the constraints but the remaining variables are nonlinear only in the objective, then must set the first elements of the array to zero, but these hardly need to be verified. Again it is reasonable to specify .

If or , the default value is used.

If or , the default value is used.

If or , the default value is used.

If or , the default value is used.

‘Superbasics Limit’int

Default

Note that this option does not apply to linear problems.

It places a limit on the storage allocated for superbasic variables. Ideally, the value of should be set slightly larger than the ‘number of degrees of freedom’ expected at the solution.

For nonlinear problems, the number of degrees of freedom is often called the ‘number of independent variables’. Normally, the value of need not be greater than , but for many problems it may be considerably smaller. (This will save storage if is very large.)

If , the default value is used.

‘Unbounded Objective’float

Default

‘Unbounded Step Size’float

Default

These options are intended to detect unboundedness in nonlinear problems. During the linesearch, the objective function is evaluated at points of the form , where and are fixed and varies. If exceeds or exceeds , the iterations are terminated and the function returns with errno = 3.

If singularities are present, unboundedness in may manifest itself by a floating-point overflow during the evaluation of , before the test against can be made.

Unboundedness in is best avoided by placing finite upper and lower bounds on the variables.

If or , the appropriate default value is used.

‘Verify Level’int

Default

This option refers to finite difference checks on the gradient elements computed by and . Gradients are verified at the first point that satisfies the linear constraints and the upper and lower bounds. Unspecified gradient elements are not checked and hence they result in no overhead. The possible choices for are the following.

Meaning

No checks are performed.

Only a ‘cheap’ test will be performed, requiring three calls to and two calls to . Note that no checks are carried out if every column of the constraint gradients (Jacobian) contains a missing element.

Individual objective gradient elements will be checked using a reliable (but more expensive) test. If , a key of the form OK or BAD? indicates whether or not each element appears to be correct. If a gradient element is determined to be extremely poor (i.e., if it appears to have no significant digits of accuracy at all), then nlp1_sparse_solve will also exit with an error indicator in argument .

Individual columns of the constraint gradients (Jacobian) will be checked using a reliable (but more expensive) test. If , a key of the form OK or BAD? indicates whether or not each element appears to be correct.

Check both constraint and objective gradients (in that order) as described above for and respectively.

The value should be used whenever new user-supplied functions are being developed. The ‘Start Objective Check At Column’ and ‘Stop Objective Check At Column’ keywords may be used to limit the number of nonlinear variables to be checked.

If or , the default value is used.
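The flavour of these checks can be reproduced outside the library with central differences; a rough sketch, with the OK/BAD? reporting borrowed from the description above::

    import numpy as np

    def check_gradient(f, g, x, tol=1e-6):
        # Compare a supplied gradient function g against central
        # differences of the objective f, element by element.
        grad = np.asarray(g(x), dtype=float)
        h = np.sqrt(np.finfo(float).eps)
        for j in range(x.size):
            e = np.zeros_like(x)
            e[j] = h
            fd = (f(x + e) - f(x - e)) / (2.0 * h)
            ok = abs(fd - grad[j]) <= tol * max(1.0, abs(fd))
            print(f"element {j}: {'OK' if ok else 'BAD?'}")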

‘Violation Limit’float

Default

This option defines an absolute limit on the magnitude of the maximum constraint violation after the linesearch. Upon completion of the linesearch, the new iterate satisfies the condition

where is the point at which the nonlinear constraints are first evaluated and is the th nonlinear constraint violation .

The effect of the violation limit is to restrict the iterates to lie in an expanded feasible region whose size depends on the magnitude of . This makes it possible to keep the iterates within a region where the objective function is expected to be well-defined and bounded below (or above in the case of maximization). If the objective function is bounded below (or above in the case of maximization) for all values of the variables, then may be any large positive value.

If , the default value is used.
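For reference (not library code), the quantity being limited is the largest amount by which any nonlinear constraint value lies outside its bounds::

    import numpy as np

    def max_violation(F, bl, bu):
        # v_i = max(0, bl_i - F_i, F_i - bu_i); return the largest v_i.
        return float(np.max(np.maximum(0.0, np.maximum(bl - F, F - bu))))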

Raises
NagValueError
(errno )

On entry, .

Constraint: or .

(errno )

On entry, .

Constraint: .

(errno )

On entry, .

Constraint: .

(errno )

On entry, , and .

Constraint: .

(errno )

On entry, and .

Constraint: .

(errno )

On entry, and .

Constraint: .

(errno )

On entry, and .

Constraint: if then .

(errno )

On entry, , and .

Constraint: if then .

(errno )

On entry, .

Constraint: .

(errno )

On entry, , and .

Constraint: .

(errno )

On entry, , and .

Constraint: or .

(errno )

On entry, , and .

Constraint: .

(errno )

On entry, .

Constraint: .

(errno )

On entry, and .

Constraint: for all .

(errno )

On entry, .

Constraint: .

(errno )

On entry, , and .

Constraint: .

(errno )

On entry, .

Constraint: for all .

(errno )

On entry, , and , for .

Constraint: for all .

(errno )

On entry, duplicate element found in row , column .

(errno )

On entry, and .

Constraint: if then for all .

(errno )

On entry, and .

Constraint: if then for all .

(errno )

On entry, and .

Constraint: if then for all .

(errno )

On entry, and .

Constraint: if then for all .

(errno )

On entry, , , and .

Constraint: .

(errno )

On entry, , , and .

Constraint: .

(errno )

On entry, the equal bounds on are infinite, because and , but : and .

(errno )

On entry, the bounds on are inconsistent: and .

(errno )

Function appears to be giving incorrect gradients.

(errno )

Function appears to be giving incorrect gradients.

(errno )

Numerical error in trying to satisfy the linear constraints.

(errno )

Not enough integer workspace for the basis factors.

(errno )

Not enough real workspace for the basis factors.

Warns
NagAlgorithmicWarning
(errno )

Constraint and objective values could not be calculated.

(errno )

User requested termination by setting negative in or .

(errno )

Infeasible problem, nonlinear infeasibilities minimized.

(errno )

No feasible point for the nonlinear constraints.

(errno )

No feasible point for the linear constraints.

(errno )

The problem is unbounded (or badly scaled).

(errno )

Violation Limit exceeded. The problem may be unbounded.

(errno )

Feasible solution, but requested accuracy could not be achieved.

(errno )

Current point cannot be improved upon.

(errno )

The basis is singular after factorization attempts.

(errno )

Not enough integer workspace to start solving the problem.

(errno )

Not enough real workspace to start solving the problem.

NagAlgorithmicMajorWarning
(errno )

Major Iteration Limit exceeded.

(errno )

Minor Iteration Limit exceeded.

(errno )

Iteration Limit exceeded.

(errno )

The value of the option ‘Superbasics Limit’ is too small.

Notes

In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility.

nlp1_sparse_solve is designed to solve a class of nonlinear programming problems that are assumed to be stated in the following general form:

where is a set of variables, is a smooth scalar objective function, and are constant lower and upper bounds, is a vector of smooth nonlinear constraint functions and is a sparse matrix.

The constraints involving and are called the general constraints. Note that upper and lower bounds are specified for all variables and constraints. This form allows full generality in specifying various types of constraint. In particular, the th constraint can be defined as an equality by setting . If certain bounds are not present, the associated elements of or can be set to special values that will be treated as or . (See the description of the option ‘Infinite Bound Size’.)

nlp1_sparse_solve converts the upper and lower bounds on the elements of and to equalities by introducing a set of slack variables , where . For example, the linear constraint is replaced by , together with the bounded slack . The problem defined by (1) can, therefore, be re-written in the following equivalent form:

Since the slack variables are subject to the same upper and lower bounds as the elements of and , the bounds on and can simply be thought of as bounds on the combined vector . The elements of and are partitioned into basic, nonbasic and superbasic variables defined as follows:

  • a basic variable ( say) is the th variable associated with the th column of the basis matrix ;

  • a nonbasic variable is a variable that is temporarily fixed at its current value (usually its upper or lower bound);

  • a superbasic variable is a nonbasic variable which is not at one of its bounds and is free to move in any desired direction (namely, one that will improve the value of the objective function or reduce the sum of infeasibilities).

For example, in the simplex method (see Gill et al. (1981)) the elements of can be partitioned at each vertex into a set of basic variables (all non-negative) and a set of nonbasic variables (all zero). This is equivalent to partitioning the columns of the constraint matrix as , where contains the columns that correspond to the basic variables and contains the columns that correspond to the nonbasic variables. Note that is square and nonsingular.
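To make the slack transformation and the basis partition concrete, here is a toy sketch with invented data::

    import numpy as np

    # A tiny range constraint l <= A x <= u with two rows, three columns.
    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 3.0, 4.0]])

    # Slacks s turn the ranges into equalities A x - s = 0 with l <= s <= u,
    # so the combined constraint matrix is [A  -I].
    A_aug = np.hstack([A, -np.eye(2)])

    # Any two linearly independent columns can serve as the basis B;
    # the remaining columns form N.
    basic = [0, 1]
    nonbasic = [j for j in range(A_aug.shape[1]) if j not in basic]
    B, N = A_aug[:, basic], A_aug[:, nonbasic]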

The option ‘Maximize’ may be used to specify an alternative problem in which is maximized. If the objective function is nonlinear and all the constraints are linear, is absent and the problem is said to be linearly constrained. In general, the objective and constraint functions are structured in the sense that they are formed from sums of linear and nonlinear functions. This structure can be exploited by the function during the solution process as follows.

Consider the following nonlinear optimization problem with four variables ():

subject to the constraints

and to the bounds

This problem has several characteristics that can be exploited by the function:

  • the objective function is nonlinear. It is the sum of a nonlinear function of the variables () and a linear function of the variables ();

  • the first two constraints are nonlinear. The third is linear;

  • each nonlinear constraint function is the sum of a nonlinear function of the variables () and a linear function of the variables ().

The nonlinear terms are defined by and (see Parameters), which involve only the appropriate subset of variables.

For the objective, we define the function to include only the nonlinear part of the objective. The three variables () associated with this function are known as the nonlinear objective variables. The number of them is given by (see Parameters) and they are the only variables needed in . The linear part of the objective is stored in row (see Parameters) of the (constraint) Jacobian matrix (see below).

Thus, if and denote the nonlinear and linear objective variables, respectively, the objective may be re-written in the form

where is the nonlinear part of the objective and and are constant vectors that form a row of . In this example, and .

Similarly for the constraints, we define a vector function to include just the nonlinear terms. In this example, and , where the two variables () are known as the nonlinear Jacobian variables. The number of them is given by (see Parameters) and they are the only variables needed in . Thus, if and denote the nonlinear and linear Jacobian variables, respectively, the constraint functions and the linear part of the objective have the form

where and in this example. This ensures that the Jacobian is of the form

where . Note that always appears in the top left-hand corner of .

The inequalities and implied by the constraint functions in (3) are known as the nonlinear and linear constraints, respectively. The nonlinear constraint vector in (3) and (optionally) its partial derivative matrix are set in . The matrices , and contain any (constant) linear terms. Along with the sparsity pattern of they are stored in the arrays , and (see Parameters).
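As a sketch of the column-ordered storage behind the arrays a, ha and ka (the matrix is invented, and this toy example does not show the requirement that entries for nonlinear constraints precede linear ones within a column)::

    import numpy as np

    A = np.array([[1.0, 0.0, 2.0],
                  [0.0, 3.0, 0.0],
                  [4.0, 0.0, 5.0]])

    a, ha, ka = [], [], [1]            # 1-based, following the Fortran base
    for j in range(A.shape[1]):        # columns in increasing index order
        for i in range(A.shape[0]):
            if A[i, j] != 0.0:
                a.append(A[i, j])      # the nonzero value
                ha.append(i + 1)       # its row index
        ka.append(len(a) + 1)          # start of the next column within a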

In general, the vectors and have different dimensions, but they always overlap, in the sense that the shorter vector is always the beginning of the other. In the above example, the nonlinear Jacobian variables are an ordered subset of the nonlinear objective variables . In other cases it could be the other way round (whichever is the most convenient), but the first way keeps as small as possible.

Note that the nonlinear objective function may involve either a subset or superset of the variables appearing in the nonlinear constraint functions . Thus, (or vice-versa). Sometimes the objective and constraints really involve disjoint sets of nonlinear variables. In such cases the variables should be ordered so that and , where the objective is nonlinear in just the last vector . The first elements of the gradient array should also be set to zero in .

If all elements of the constraint Jacobian are known (i.e., the option or ), any constant elements may be assigned their correct values in , and . The corresponding elements of the constraint Jacobian array need not be reset in . This includes values that are identically zero as constraint Jacobian elements are assumed to be zero unless specified otherwise. It must be emphasized that, if or , unassigned elements of are not treated as constant; they are estimated by finite differences, at nontrivial expense.

If there are no nonlinear constraints in (1) and is linear or quadratic, then it may be more efficient to use qpconvex2_sparse_solve() to solve the resulting linear or quadratic programming problem, or one of lp_solve(), lsq_lincon_solve() or qp_dense_solve() if is a dense matrix. If the problem is dense and does have nonlinear constraints then one of nlp2_solve(), nlp1_rcomm() or lsq_gencon_deriv() (as appropriate) should be used instead.

You must supply an initial estimate of the solution to (1), together with versions of and that define and , respectively, and as many first partial derivatives as possible. Note that if there are any nonlinear constraints, then the first call to will precede the first call to .
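The nonlinear parts are often prototyped as plain functions before being adapted to the solver's objfun and confun argument lists (take the exact signatures from the e04ug document or from help(opt.nlp1_sparse_solve), not from this sketch; the functional forms below are invented)::

    import numpy as np

    def nonlinear_objective(x):
        # Invented nonlinear objective part and its gradient.
        f = x[0] ** 2 + x[1] * x[2]
        g = np.array([2.0 * x[0], x[2], x[1]])
        return f, g

    def nonlinear_constraints(x):
        # Invented nonlinear constraint parts and their dense Jacobian.
        F = np.array([x[0] ** 2 + x[1] ** 2, x[0] * x[1]])
        J = np.array([[2.0 * x[0], 2.0 * x[1]],
                      [x[1],       x[0]]])
        return F, J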

nlp1_sparse_solve is based on the SNOPT package described in Gill et al. (2002), which in turn utilizes functions from the MINOS package (see Murtagh and Saunders (1995)). It incorporates a Sequential Quadratic Programming (SQP) method that obtains search directions from a sequence of Quadratic Programming (QP) subproblems. Each QP subproblem minimizes a quadratic model of a certain Lagrangian function subject to a linearization of the constraints. An augmented Lagrangian merit function is reduced along each search direction to ensure convergence from any starting point. Further details can be found in Algorithmic Details.

Throughout this document the symbol is used to represent the machine precision (see machine.precision).

References

Conn, A R, 1973, Constrained optimization using a nondifferentiable penalty function, SIAM J. Numer. Anal. (10), 760–779

Eldersveld, S K, 1991, Large-scale sequential quadratic programming algorithms, PhD Thesis, Department of Operations Research, Stanford University, Stanford

Fletcher, R, 1984, An ℓ1 penalty method for nonlinear constraints, Numerical Optimization 1984, (eds P T Boggs, R H Byrd and R B Schnabel), 26–40, SIAM Philadelphia

Fourer, R, 1982, Solving staircase linear programs by the simplex method, Math. Programming (23), 274–313

Gill, P E, Murray, W and Saunders, M A, 2002, SNOPT: An SQP Algorithm for Large-scale Constrained Optimization, SIAM J. Optim. (12), 979–1006

Gill, P E, Murray, W, Saunders, M A and Wright, M H, 1986, Users’ guide for NPSOL (Version 4.0): a Fortran package for nonlinear programming, Report SOL 86-2, Department of Operations Research, Stanford University

Gill, P E, Murray, W, Saunders, M A and Wright, M H, 1989, A practical anti-cycling procedure for linearly constrained optimization, Math. Programming (45), 437–474

Gill, P E, Murray, W, Saunders, M A and Wright, M H, 1992, Some theoretical properties of an augmented Lagrangian merit function, Advances in Optimization and Parallel Computing, (ed P M Pardalos), 101–128, North Holland

Gill, P E, Murray, W and Wright, M H, 1981, Practical Optimization, Academic Press

Hock, W and Schittkowski, K, 1981, Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems (187), Springer-Verlag

Murtagh, B A and Saunders, M A, 1995, MINOS 5.4 users’ guide, Report SOL 83-20R, Department of Operations Research, Stanford University

Ortega, J M and Rheinboldt, W C, 1970, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press

Powell, M J D, 1974, Introduction to constrained optimization, Numerical Methods for Constrained Optimization, (eds P E Gill and W Murray), 1–28, Academic Press