naginterfaces.library.opt.qpconvex1_sparse_solve
- naginterfaces.library.opt.qpconvex1_sparse_solve(m, iobj, ncolh, a, ha, ka, bl, bu, start, names, crname, ns, xs, istate, leniz, lenz, comm, qphx=None, data=None, io_manager=None)
qpconvex1_sparse_solve solves sparse linear programming or convex quadratic programming problems.
Note: this function uses optional algorithmic parameters; see also qpconvex1_sparse_option_file(), qpconvex1_sparse_option_string() and nlp1_init().
For full information please refer to the NAG Library document for e04nk:
https://support.nag.com/numeric/nl/nagdoc_30.3/flhtml/e04/e04nkf.html
- Parameters
- mint
- m, the number of general linear constraints (or slacks). This is the number of rows in A, including the free row (if any; see iobj).
- iobjint
If iobj > 0, row iobj of A is a free row containing the nonzero elements of the vector c appearing in the linear objective term c^T x.
If iobj = 0, there is no free row, i.e., the problem is either an FP problem (in which case iobj must be set to zero), or a QP problem with c = 0.
- ncolhint
n_H, the number of leading nonzero columns of the Hessian matrix H. For FP and LP problems, ncolh must be set to zero.
- afloat, array-like, shape
The nonzero elements of A, ordered by increasing column index. Note that elements with the same row and column indices are not allowed.
- haint, array-like, shape
ha[i-1] must contain the row index of the nonzero element stored in a[i-1], for i = 1, 2, …, nnz. Note that the row indices for a column may be supplied in any order.
- kaint, array-like, shape
ka[j-1] must contain the index in a of the start of the jth column, for j = 1, 2, …, n. ka[n] must be set to nnz+1. To specify the jth column as empty, set ka[j-1] = ka[j]. As a consequence ka[0] is always 1.
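As an informal illustration of this column-wise storage scheme, the following sketch builds a, ha and ka for a small dense matrix. It assumes the 1-based row indices and column pointers described above and is not part of the library interface.

    import numpy as np

    # Illustrative sketch: column-wise sparse storage (a, ha, ka) for a small
    # 3-by-4 matrix, assuming 1-based (Fortran-style) indices as described above.
    A_dense = np.array([[2.0, 0.0, 1.0, 0.0],
                        [0.0, 3.0, 0.0, 4.0],
                        [1.0, 1.0, 0.0, 0.0]])

    a, ha, ka = [], [], [1]                      # ka[0] = 1: start of column 1
    for j in range(A_dense.shape[1]):
        rows = np.nonzero(A_dense[:, j])[0]
        a.extend(A_dense[rows, j].tolist())      # nonzeros of column j+1
        ha.extend((rows + 1).tolist())           # 1-based row indices
        ka.append(len(a) + 1)                    # start of the next column

    # ka has n+1 entries and its last entry equals nnz+1.
    print(a)    # [2.0, 1.0, 3.0, 1.0, 1.0, 4.0]
    print(ha)   # [1, 3, 2, 3, 1, 2]
    print(ka)   # [1, 3, 5, 6, 7]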
- blfloat, array-like, shape
l, the lower bounds for all the variables and general constraints, in the following order. The first n elements of bl must contain the bounds on the variables x, and the next m elements the bounds for the general linear constraints Ax (or slacks s) and the free row (if any). To specify a nonexistent lower bound (i.e., l_j = -inf), set bl[j-1] <= -bigbnd, where bigbnd is the value of the option ‘Infinite Bound Size’. To specify the jth constraint as an equality, set bl[j-1] = bu[j-1] = beta, say, where |beta| < bigbnd. Note that the lower bound corresponding to the free row must be set to -inf and stored in bl[n+iobj-1].
- bufloat, array-like, shape
u, the upper bounds for all the variables and general constraints, in the following order. The first n elements of bu must contain the bounds on the variables x, and the next m elements the bounds for the general linear constraints Ax (or slacks s) and the free row (if any). To specify a nonexistent upper bound (i.e., u_j = +inf), set bu[j-1] >= bigbnd. Note that the upper bound corresponding to the free row must be set to +inf and stored in bu[n+iobj-1].
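The sketch below shows one way of assembling bl and bu consistent with the ordering just described; the particular values, and the use of 1.0e20 as the ‘infinite’ bound, are illustrative assumptions only.

    import numpy as np

    # Illustrative sketch: bounds for n = 4 variables and m = 3 general rows.
    n, m = 4, 3
    bigbnd = 1.0e20                  # plays the role of 'infinity' here

    bl = np.concatenate([np.zeros(n),                       # variables: x >= 0
                         [-bigbnd, 5.0, 2.5]])              # rows: free below / >= 5 / equality
    bu = np.concatenate([np.full(n, bigbnd),                # variables: no upper bound
                         [10.0, bigbnd, 2.5]])              # rows: <= 10 / free above / equality

    # The third general row is an equality constraint because bl and bu coincide there.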
- startstr, length 1
Indicates how a starting basis is to be obtained.
If start = ‘C’, an internal Crash procedure will be used to choose an initial basis matrix B.
If start = ‘W’, a basis is already defined in istate (probably from a previous call).
- namesstr, length 8, array-like, shape
A set of names associated with the so-called MPSX form of the problem, as follows:
Must contain the name for the problem (or be blank).
Must contain the name for the free row (or be blank).
Must contain the name for the constraint right-hand side (or be blank).
Must contain the name for the ranges (or be blank).
Must contain the name for the bounds (or be blank).
(These names are used in the monitoring file output; see Monitoring Information.)
- crnamestr, length 8, array-like, shape
The optional column and row names, respectively.
If , is not referenced and the printed output will use default names for the columns and rows.
If , the first elements must contain the names for the columns and the next elements must contain the names for the rows.
Note that the name for the free row (if any) must be stored in .
- nsint
n_S, the number of superbasics. For QP problems, ns need not be specified if start = ‘C’, but must retain its value from a previous call when start = ‘W’. For FP and LP problems, ns need not be initialized.
- xsfloat, array-like, shape
The initial values of the variables and slacks . (See the description for .)
- istateint, array-like, shape
If start = ‘C’, the first n elements of istate and xs must specify the initial states and values, respectively, of the variables x. (The slacks s need not be initialized.) An internal Crash procedure is then used to select an initial basis matrix B. The initial basis matrix will be triangular (neglecting certain small elements in each column). It is chosen from various rows and columns of (A  -I). Possible values for istate[j-1] are as follows:
istate[j-1]   State of xs[j-1] during Crash procedure
0 or 1        Eligible for the basis
2             Ignored
3             Eligible for the basis (given preference over 0 or 1)
4 or 5        Ignored
If nothing special is known about the problem, or there is no wish to provide special information, you may set istate[j-1] = 0 and xs[j-1] = 0.0, for j = 1, 2, …, n.
All variables will then be eligible for the initial basis.
Less trivially, to say that the jth variable will probably be equal to one of its bounds, set istate[j-1] = 4 and xs[j-1] = bl[j-1] or istate[j-1] = 5 and xs[j-1] = bu[j-1] as appropriate.
Following the Crash procedure, variables for which istate[j-1] = 2 are made superbasic.
Other variables not selected for the basis are then made nonbasic at the value xs[j-1] if bl[j-1] <= xs[j-1] <= bu[j-1], or at the value bl[j-1] or bu[j-1] closest to xs[j-1].
If start = ‘W’, istate and xs must specify the initial states and values, respectively, of the variables and slacks (x, s).
If qpconvex1_sparse_solve has been called previously with the same values of n and m, istate already contains satisfactory information.
- lenizint
The dimension of the internal workspace array iz.
- lenzint
The dimension of the internal workspace array z.
- commdict, communication object, modified in place
Communication structure.
This argument must have been initialized by a prior call to nlp1_init().
- qphxNone or callable hx = qphx(nstate, x, data=None), optional
Note: if this argument is None then a NAG-supplied facility will be used.
For QP problems, you must supply a version of qphx to compute the matrix product Hx.
If H has zero rows and columns, it is most efficient to order the variables x = (y, z)^T so that Hx = (H_1 y, 0)^T, where the nonlinear variables y appear first as shown.
For FP and LP problems, qphx will never be called by qpconvex1_sparse_solve and hence may be None.
- Parameters
- nstateint
If nstate = 1, qpconvex1_sparse_solve is calling qphx for the first time. This argument setting allows you to save computation time if certain data must be read or calculated only once.
If nstate >= 2, qpconvex1_sparse_solve is calling qphx for the last time. This argument setting allows you to perform some additional computation on the final solution.
In general, the last call to qphx is made with nstate >= 2 (see Exceptions).
Otherwise, nstate = 0.
- xfloat, ndarray, shape
The first ncolh elements of the vector x.
- dataarbitrary, optional, modifiable in place
User-communication data for callback functions.
- Returns
- hxfloat, array-like, shape
The product Hx.
- dataarbitrary, optional
User-communication data for callback functions.
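A minimal sketch of a callback with the documented form hx = qphx(nstate, x, data=None) is shown below; the explicit 2 by 2 Hessian block is an assumption used purely for illustration, since in general only the product needs to be formed, not H itself.

    import numpy as np

    # Illustrative sketch of a qphx callback: returns H*x for the leading
    # ncolh-by-ncolh block of H (here an assumed 2-by-2 matrix).
    H_LEADING = np.array([[2.0, 1.0],
                          [1.0, 2.0]])

    def qphx(nstate, x, data=None):
        # x holds the leading ncolh elements of the current point; nstate flags
        # the first and last calls and is not needed in this simple case.
        return H_LEADING @ np.asarray(x)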
- io_managerFileObjManager, optional
Manager for I/O in this routine.
- Returns
- nsint
The final number of superbasics. This will be zero for FP and LP problems.
- xsfloat, ndarray, shape
The final values of the variables and slacks .
- istateint, ndarray, shape
The final states of the variables and slacks (x, s). The significance of each possible value of istate[j-1] is as follows:
istate[j-1]   State of variable j   Normal value of xs[j-1]
0             Nonbasic              bl[j-1]
1             Nonbasic              bu[j-1]
2             Superbasic            Between bl[j-1] and bu[j-1]
3             Basic                 Between bl[j-1] and bu[j-1]
If ninf = 0, basic and superbasic variables may be outside their bounds by as much as the value of the option ‘Feasibility Tolerance’.
Note that unless ‘Scale Option’ = 0 is specified, the option ‘Feasibility Tolerance’ applies to the variables of the scaled problem.
In this case, the variables of the original problem may be as much as 0.1 outside their bounds, but this is unlikely unless the problem is very badly scaled.
Very occasionally some nonbasic variables may be outside their bounds by as much as the option ‘Feasibility Tolerance’, and there may be some nonbasic variables for which xs[j-1] lies strictly between its bounds.
If ninf > 0, some basic and superbasic variables may be outside their bounds by an arbitrary amount (bounded by sinf if ‘Scale Option’ = 0).
- minizint
The minimum value of leniz required to start solving the problem. If errno = 12, qpconvex1_sparse_solve may be called again with leniz suitably larger than miniz. (The bigger the better, since it is not certain how much workspace the basis factors need.)
- minzint
The minimum value of lenz required to start solving the problem. If errno = 13, qpconvex1_sparse_solve may be called again with lenz suitably larger than minz. (The bigger the better, since it is not certain how much workspace the basis factors need.)
- ninfint
The number of infeasibilities. This will be zero if the function exits successfully or errno = 1.
- sinffloat
The sum of infeasibilities. This will be zero if ninf = 0. (Note that qpconvex1_sparse_solve does not attempt to compute the minimum value of sinf if errno = 3.)
- objfloat
The value of the objective function.
If , includes the quadratic objective term (if any).
If , is just the linear objective term (if any).
For FP problems, is set to zero.
- clamdafloat, ndarray, shape
A set of Lagrange multipliers for the bounds on the variables and the general constraints. More precisely, the first n elements contain the multipliers (reduced costs) for the bounds on the variables, and the next m elements contain the multipliers (shadow prices) for the general linear constraints.
- Other Parameters
- ‘Check Frequency’int
Default
Every th iteration after the most recent basis factorization, a numerical test is made to see if the current solution satisfies the linear constraints . If the largest element of the residual vector is judged to be too large, the current basis is refactorized and the basic variables recomputed to satisfy the constraints more accurately. If , the default value is used. If , the value is used and effectively no checks are made.
- ‘Crash Option’int
Default
Note that this option does not apply when (see Parameters).
If , an internal Crash procedure is used to select an initial basis from various rows and columns of the constraint matrix . The value of determines which rows and columns are initially eligible for the basis, and how many times the Crash procedure is called. If , the all-slack basis is chosen. If , the Crash procedure is called once (looking for a triangular basis in all rows and columns of the linear constraint matrix ). If , the Crash procedure is called twice (looking at any equality constraints first followed by any inequality constraints). If or , the default value is used.
If or , certain slacks on inequality rows are selected for the basis first. (If , numerical values are used to exclude slacks that are close to a bound.) The Crash procedure then makes several passes through the columns of , searching for a basis matrix that is essentially triangular. A column is assigned to ‘pivot’ on a particular row if the column contains a suitably large element in a row that has not yet been assigned. (The pivot elements ultimately form the diagonals of the triangular basis.) For remaining unassigned rows, slack variables are inserted to complete the basis.
- ‘Crash Tolerance’float
Default
This value allows the Crash procedure to ignore certain ‘small’ nonzero elements in the constraint matrix A while searching for a triangular basis. For each column of A, if a_max is the largest element in the column, other nonzeros in that column are ignored if they are less than (or equal to) a_max × r.
When r > 0, the basis obtained by the Crash procedure may not be strictly triangular, but it is likely to be nonsingular and almost triangular. The intention is to obtain a starting basis with more column variables and fewer (arbitrary) slacks. A feasible solution may be reached earlier for some problems. If r < 0 or r >= 1, the default value is used.
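A small sketch of the rule just described, with illustrative numbers:

    import numpy as np

    # Illustrative sketch: with 'Crash Tolerance' r, nonzeros of a column whose
    # magnitude is at most r times the column's largest magnitude are ignored
    # when searching for a triangular basis.
    col = np.array([4.0, 0.05, -0.2, 1.0])
    r = 0.1
    keep = np.abs(col) > r * np.abs(col).max()
    print(col[keep])    # [4. 1.]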
- ‘Defaults’valueless
This special keyword may be used to reset all options to their default values.
- ‘Expand Frequency’int
Default
This option is part of an anti-cycling procedure (see Miscellaneous) designed to allow progress even on highly degenerate problems.
For LP problems, the strategy is to force a positive step at every iteration, at the expense of violating the constraints by a small amount. Suppose that the value of the option ‘Feasibility Tolerance’ is delta. Over a period of i iterations, the feasibility tolerance actually used by qpconvex1_sparse_solve (i.e., the working feasibility tolerance) increases from 0.5*delta to delta (in steps of 0.5*delta/i).
For QP problems, the same procedure is used for iterations in which there is only one superbasic variable. (Cycling can only occur when the current solution is at a vertex of the feasible region.) Thus, zero steps are allowed if there is more than one superbasic variable, but otherwise positive steps are enforced.
Increasing the value of i helps reduce the number of slightly infeasible nonbasic variables (most of which are eliminated during the resetting procedure). However, it also diminishes the freedom to choose a large pivot element (see option ‘Pivot Tolerance’).
If , the default value is used. If , the value is used and effectively no anti-cycling procedure is invoked.
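As a rough numerical sketch of the working feasibility tolerance described above (the exact internal schedule may differ in detail):

    # Illustrative sketch: over k iterations the working feasibility tolerance
    # grows from 0.5*delta to delta in steps of 0.5*delta/k, where delta is the
    # 'Feasibility Tolerance' and k the 'Expand Frequency'.
    delta, k = 1.0e-6, 10000
    step = 0.5 * delta / k
    for it in (0, k // 2, k):
        print(0.5 * delta + it * step)   # 5e-07, 7.5e-07, 1e-06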
- ‘Factorization Frequency’int
Default
If , at most basis changes will occur between factorizations of the basis matrix. For LP problems, the basis factors are usually updated at every iteration. For QP problems, fewer basis updates will occur as the solution is approached. The number of iterations between basis factorizations will, therefore, increase. During these iterations a test is made regularly according to the value of option ‘Check Frequency’ to ensure that the linear constraints are satisfied. If necessary, the basis will be refactorized before the limit of updates is reached. If , the default value is used.
- ‘Feasibility Tolerance’float
Default
If , defines the maximum acceptable absolute violation in each constraint at a ‘feasible’ point (including slack variables). For example, if the variables and the coefficients in the linear constraints are of order unity, and the latter are correct to about five decimal digits, it would be appropriate to specify as . If , the default value is used.
qpconvex1_sparse_solve attempts to find a feasible solution before optimizing the objective function. If the sum of infeasibilities cannot be reduced to zero, the problem is assumed to be infeasible. Let Sinf be the corresponding sum of infeasibilities. If Sinf is quite small, it may be appropriate to raise r by a factor of 10 or 100. Otherwise, some error in the data should be suspected. Note that the function does not attempt to find the minimum value of Sinf.
If the constraints and variables have been scaled (see ‘Scale Option’), then feasibility is defined in terms of the scaled problem (since it is more likely to be meaningful).
- ‘Infinite Bound Size’float
Default
If , defines the ‘infinite’ bound in the definition of the problem constraints. Any upper bound greater than or equal to will be regarded as (and similarly any lower bound less than or equal to will be regarded as ). If , the default value is used.
- ‘Infinite Step Size’float
Default
If , specifies the magnitude of the change in variables that will be considered a step to an unbounded solution. (Note that an unbounded solution can occur only when the Hessian is not positive definite.) If the change in during an iteration would exceed the value of , the objective function is considered to be unbounded below in the feasible region. If , the default value is used.
- ‘Iteration Limit’int
Default
The value of specifies the maximum number of iterations allowed before termination. Setting and means that the workspace needed to start solving the problem will be computed and printed, but no iterations will be performed. If , the default value is used.
- ‘Iters’int
Default
The value of specifies the maximum number of iterations allowed before termination. Setting and means that the workspace needed to start solving the problem will be computed and printed, but no iterations will be performed. If , the default value is used.
- ‘Itns’int
Default
The value of specifies the maximum number of iterations allowed before termination. Setting and means that the workspace needed to start solving the problem will be computed and printed, but no iterations will be performed. If , the default value is used.
- ‘List’valueless
Option ‘List’ enables printing of each option specification as it is supplied. ‘Nolist’ suppresses this printing.
- ‘Nolist’valueless
Default
Option ‘List’ enables printing of each option specification as it is supplied. ‘Nolist’ suppresses this printing.
- ‘LU Factor Tolerance’float
Default
The values of and affect the stability and sparsity of the basis factorization , during refactorization and updates respectively. The lower triangular matrix is a product of matrices of the form
where the multipliers will satisfy . The default values of and usually strike a good compromise between stability and sparsity. For large and relatively dense problems, setting and to (say) may give a marked improvement in sparsity without impairing stability to a serious degree.
Note that for band matrices it may be necessary to set in the range in order to achieve stability. If or , the default value is used.
- ‘LU Update Tolerance’float
Default
The values of and affect the stability and sparsity of the basis factorization , during refactorization and updates respectively. The lower triangular matrix is a product of matrices of the form
where the multipliers will satisfy . The default values of and usually strike a good compromise between stability and sparsity. For large and relatively dense problems, setting and to (say) may give a marked improvement in sparsity without impairing stability to a serious degree.
Note that for band matrices it may be necessary to set in the range in order to achieve stability. If or , the default value is used.
- ‘LU Singularity Tolerance’float
Default
If , defines the singularity tolerance used to guard against ill-conditioned basis matrices. Whenever the basis is refactorized, the diagonal elements of are tested as follows. If or , the th column of the basis is replaced by the corresponding slack variable. If , the default value is used.
- ‘Minimize’valueless
Default
This option specifies the required direction of the optimization. It applies to both linear and nonlinear terms (if any) in the objective function. Note that if two problems are the same except that one minimizes and the other maximizes , their solutions will be the same but the signs of the dual variables and the reduced gradients (see Main Iteration) will be reversed.
- ‘Maximize’valueless
This option specifies the required direction of the optimization. It applies to both linear and nonlinear terms (if any) in the objective function. Note that if two problems are the same except that one minimizes and the other maximizes , their solutions will be the same but the signs of the dual variables and the reduced gradients (see Main Iteration) will be reversed.
- ‘Monitoring File’int
Default
If and (see ‘Print Level’), monitoring information produced by qpconvex1_sparse_solve is sent to a file with logical unit number . If and/or , the default value is used and hence no monitoring information is produced.
- ‘Optimality Tolerance’float
Default
If , is used to judge the size of the reduced gradients . By definition, the reduced gradients for basic variables are always zero. Optimality is declared if the reduced gradients for any nonbasic variables at their lower or upper bounds satisfy , and if for any superbasic variables. If , the default value is used.
- ‘Partial Price’int
Default
Note that this option does not apply to QP problems.
This option is recommended for large FP or LP problems that have significantly more variables than constraints (i.e., ). It reduces the work required for each pricing operation (i.e., when a nonbasic variable is selected to enter the basis). If , all columns of the constraint matrix are searched. If , and are partitioned to give roughly equal segments , for (modulo ). If the previous pricing search was successful on , the next search begins on the segments . If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to enter the basis. If nothing is found, the search continues on the next segments , and so on. If , the default value is used.
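A simplified sketch of the idea follows. Here the columns of A and -I are pooled into one index set and split into p segments; the real procedure partitions A and I separately and uses a dynamic tolerance, so this is illustrative only.

    import numpy as np

    # Illustrative sketch of partial pricing: only one of p roughly equal column
    # segments is searched per pricing operation.
    rng = np.random.default_rng(0)
    n, m, p = 10, 4, 3
    segments = np.array_split(np.arange(n + m), p)
    d = rng.normal(size=n + m)              # stand-in reduced gradients

    last_successful = 0                     # segment where the previous search succeeded
    seg = segments[(last_successful + 1) % p]
    entering = seg[np.argmax(np.abs(d[seg]))]
    print(entering)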
- ‘Pivot Tolerance’float
Default
If , is used to prevent columns entering the basis if they would cause the basis to become almost singular. If , the default value is used.
- ‘Print Level’int
Default
The value of i controls the amount of printout produced by qpconvex1_sparse_solve, as indicated below. A detailed description of the printed output is given in Further Comments (summary output at each iteration and the final solution) and Monitoring Information (monitoring information at each iteration). Note that the summary output will not exceed 80 characters per line and that the monitoring information will not exceed 120 characters per line. If i < 0, the default value is used.
The following printout is sent to the file object associated with the advisory I/O unit (see FileObjManager):
Output
No output.
The final solution only.
One line of summary output for each iteration (no printout of the final solution).
The final solution and one line of summary output for each iteration.
The following printout is sent to the logical unit number defined by the option ‘Monitoring File’:
Output
No output.
The final solution only.
One long line of output for each iteration (no printout of the final solution).
The final solution and one long line of output for each iteration.
The final solution, one long line of output for each iteration, matrix statistics (initial status of rows and columns, number of elements, density, biggest and smallest elements, etc.), details of the scale factors resulting from the scaling procedure (if ‘Scale Option’ = 1 or 2; see the description of the option ‘Scale Option’), basis factorization statistics and details of the initial basis resulting from the Crash procedure (if start = ‘C’; see Parameters).
If and the unit number defined by option ‘Monitoring File’ is the advisory unit number, then the summary output is suppressed.
- ‘Rank Tolerance’float
Default
See above.
- ‘Scale Option’int
Default
This option enables you to scale the variables and constraints using an iterative procedure due to Fourer (1982), which attempts to compute row scales and column scales such that the scaled matrix coefficients are as close as possible to unity. This may improve the overall efficiency on some problems. (The lower and upper bounds on the variables and slacks for the scaled problem are redefined as and respectively, where if .)
If , no scaling is performed. If , all rows and columns of the constraint matrix are scaled. If , an additional scaling is performed that may be helpful when the solution is large; it takes into account columns of that are fixed or have positive lower bounds or negative upper bounds. If or , the default value is used.
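The sketch below shows a few passes of simple geometric-mean scaling, one common variant of the iterative row/column scaling idea referenced above; the library’s own procedure (due to Fourer (1982)) differs in detail.

    import numpy as np

    # Illustrative sketch: row scales r and column scales c chosen so that the
    # scaled coefficients r[i] * |A[i, j]| * c[j] move towards unity.
    A = np.array([[1.0e3, 2.0e-2],
                  [5.0e1, 4.0e0]])          # no zero entries, to keep the sketch short
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(4):
        S = np.abs(A) * r[:, None] * c[None, :]
        r /= np.sqrt(S.max(axis=1) * S.min(axis=1))
        S = np.abs(A) * r[:, None] * c[None, :]
        c /= np.sqrt(S.max(axis=0) * S.min(axis=0))
    print(np.abs(A) * r[:, None] * c[None, :])   # entries now of order one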
- ‘Scale Tolerance’float
Default
Note that this option does not apply when .
If , is used to control the number of scaling passes to be made through the constraint matrix . At least (and at most ) passes will be made. More precisely, let denote the largest column ratio (i.e., in some sense) after the th scaling pass through . The scaling procedure is terminated if for some . Thus, increasing the value of from to (say) will probably increase the number of passes through .
If or , the default value is used.
- ‘Superbasics Limit’int
Default
Note that this option does not apply to FP or LP problems.
The value of specifies ‘how nonlinear’ you expect the QP problem to be. If , the default value is used.
- Raises
- NagValueError
- (errno )
Too many iterations.
- (errno )
Reduced Hessian matrix exceeds its assigned dimension.
- (errno )
Hessian matrix appears to be indefinite.
- (errno )
On entry, and .
Constraint: if then for all .
- (errno )
On entry, and .
Constraint: if then for all .
- (errno )
On entry, and .
Constraint: if then for all .
- (errno )
On entry, the bounds on are inconsistent: and .
- (errno )
On entry, the equal bounds on are infinite, because and , but : and .
- (errno )
On entry, duplicate element found in row , column .
- (errno )
On entry, .
Constraint: .
- (errno )
On entry, .
Constraint: .
- (errno )
On entry, , and , for .
Constraint: for all .
- (errno )
On entry, .
Constraint: for all .
- (errno )
On entry, , and .
Constraint: .
- (errno )
On entry, .
Constraint: .
- (errno )
On entry, and .
Constraint: for all .
- (errno )
On entry, , and .
Constraint: or .
- (errno )
On entry, , , and .
Constraint: .
- (errno )
On entry, , , and .
Constraint: .
- (errno )
On entry, and .
Constraint: .
- (errno )
On entry, and .
Constraint: .
- (errno )
On entry, , and .
Constraint: .
- (errno )
On entry, .
Constraint: .
- (errno )
On entry, .
Constraint: .
- (errno )
On entry, .
Constraint: or .
- (errno )
Cannot satisfy the general constraints.
- (errno )
Not enough integer workspace for the basis factors.
- (errno )
Not enough real workspace for the basis factors.
- (errno )
The basis is singular after factorization attempts.
- (errno )
Error in basis package. Please contact NAG.
- (errno )
System error. Wrong number of basic variables. Please contact NAG.
- Warns
- NagAlgorithmicWarning
- (errno )
Weak solution found.
- (errno )
problem is unbounded (or badly scaled).
- (errno )
problem is infeasible.
- (errno )
Not enough integer workspace to start solving the problem.
- (errno )
Not enough real workspace to start solving the problem.
- Notes
In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility.
qpconvex1_sparse_solve is designed to solve a class of quadratic programming problems that are assumed to be stated in the following general form:

    minimize  f(x)   subject to   l <= (  x ) <= u,
    x in R^n                           ( Ax )

where x is a set of n variables, A is an m by n matrix and the objective function f(x) may be specified in a variety of ways depending upon the particular problem to be solved. The option ‘Maximize’ may be used to specify an alternative problem in which f(x) is maximized. The possible forms for f(x) are listed in Table [label omitted], in which the prefixes FP, LP and QP stand for ‘feasible point’, ‘linear programming’ and ‘quadratic programming’ respectively, c is an n-element vector and H is the n by n second-derivative matrix (the Hessian matrix).
Problem type   Objective function f(x)        Hessian matrix H
FP             Not applicable                 Not applicable
LP             c^T x                          Not applicable
QP             c^T x + (1/2) x^T H x          Symmetric positive semidefinite
For LP and QP problems, the unique global minimum value of f(x) is found. For FP problems, f(x) is omitted and the function attempts to find a feasible point for the set of constraints. For QP problems, you must also provide a function that computes Hx for any given vector x. (H need not be stored explicitly.) If H is the zero matrix, the function will still solve the resulting LP problem; however, this can be accomplished more efficiently by setting ncolh = 0 (see Parameters).
The defining feature of a convex QP problem is that the matrix H must be positive semidefinite, i.e., it must satisfy x^T H x >= 0 for all x. Otherwise, f(x) is said to be nonconvex and it may be more appropriate to call handle_solve_ssqp() instead.
qpconvex1_sparse_solve is intended to solve large-scale linear and quadratic programming problems in which the constraint matrix A is sparse (i.e., when the number of zero elements is sufficiently large that it is worthwhile using algorithms which avoid computations and storage involving zero elements). The function also takes advantage of sparsity in c. (Sparsity in H can be exploited in the function that computes Hx.) For problems in which A can be treated as a dense matrix, it is usually more efficient to use lp_solve(), lsq_lincon_solve() or qp_dense_solve().
The upper and lower bounds on the m elements of Ax are said to define the general constraints of the problem. Internally, qpconvex1_sparse_solve converts the general constraints to equalities by introducing a set of slack variables s, where s = (s_1, s_2, …, s_m)^T. For example, the linear constraint 5 <= 2x_1 + 3x_2 <= +inf is replaced by 2x_1 + 3x_2 - s_1 = 0, together with the bounded slack 5 <= s_1 <= +inf. The problem defined by (1) can, therefore, be re-written in the following equivalent form:

    minimize  f(x)   subject to   Ax - s = 0,   l <= ( x ) <= u.
    x in R^n, s in R^m                               ( s )

Since the slack variables s are subject to the same upper and lower bounds as the elements of Ax, the bounds on Ax and s can simply be thought of as bounds on the combined vector (x, s). (In order to indicate their special role in QP problems, the original variables x are sometimes known as ‘column variables’, and the slack variables s are known as ‘row variables’.)
Each LP or QP problem is solved using an active-set method. This is an iterative procedure with two phases: a feasibility phase, in which the sum of infeasibilities is minimized to find a feasible point; and an optimality phase, in which f(x) is minimized by constructing a sequence of iterations that lies within the feasible region.
A constraint is said to be active or binding at x if the associated element of either x or Ax is equal to one of its upper or lower bounds. Since an active constraint in Ax has its associated slack variable at a bound, the status of both the simple and general upper and lower bounds can be conveniently described in terms of the status of the variables (x, s). A variable is said to be nonbasic if it is temporarily fixed at its upper or lower bound. It follows that regarding a general constraint as being active is equivalent to thinking of its associated slack as being nonbasic.
At each iteration of an active-set method, the constraints Ax - s = 0 are (conceptually) partitioned into the form

    B x_B + S x_S + N x_N = 0,

where x_N consists of the nonbasic elements of (x, s) and the basis matrix B is square and nonsingular. The elements of x_B and x_S are called the basic and superbasic variables respectively; with x_N they are a permutation of the elements of x and s. At a QP solution, the basic and superbasic variables will lie somewhere between their upper or lower bounds, while the nonbasic variables will be equal to one of their bounds. At each iteration, x_S is regarded as a set of independent variables that are free to move in any desired direction, namely one that will improve the value of the objective function (or sum of infeasibilities). The basic variables are then adjusted in order to ensure that (x, s) continues to satisfy Ax - s = 0. The number of superbasic variables (n_S say), therefore, indicates the number of degrees of freedom remaining after the constraints have been satisfied. In broad terms, n_S is a measure of how nonlinear the problem is. In particular, n_S will always be zero for FP and LP problems.
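The sketch below illustrates this partition on randomly generated data: after the superbasic variables move, the basic variables are recomputed so that the combined vector still satisfies the equality constraints. The index sets are assumptions chosen only so that B is square and nonsingular.

    import numpy as np

    # Illustrative sketch of the partition of (A  -I) into basic (B), superbasic (S)
    # and nonbasic (N) columns, and of restoring A*x - s = 0 by adjusting x_B.
    rng = np.random.default_rng(0)
    m, n = 3, 5
    W = np.hstack([rng.normal(size=(m, n)), -np.eye(m)])   # the matrix (A  -I)

    basic, superbasic, nonbasic = [0, 5, 6], [1], [2, 3, 4, 7]
    v = rng.normal(size=n + m)                              # combined vector (x, s)

    rhs = -(W[:, superbasic] @ v[superbasic] + W[:, nonbasic] @ v[nonbasic])
    v[basic] = np.linalg.solve(W[:, basic], rhs)            # adjust the basic variables
    print(np.abs(W @ v).max())                              # ~ machine precision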
If it appears that no improvement can be made with the current definition of B, S and N, a nonbasic variable is selected to be added to S, and the process is repeated with the value of n_S increased by one. At all stages, if a basic or superbasic variable encounters one of its bounds, the variable is made nonbasic and the value of n_S is decreased by one.
Associated with each of the m equality constraints Ax - s = 0 is a dual variable pi_i. Similarly, each variable in (x, s) has an associated reduced gradient d_j (also known as a reduced cost). The reduced gradients for the variables x are the quantities g - A^T pi, where g is the gradient of the QP objective function; and the reduced gradients for the slack variables s are the dual variables pi. The QP subproblem is optimal if d_j >= 0 for all nonbasic variables at their lower bounds, d_j <= 0 for all nonbasic variables at their upper bounds and d_j = 0 for all superbasic variables. In practice, an approximate QP solution is found by slightly relaxing these conditions on d_j (see the description of the option ‘Optimality Tolerance’).
The process of computing and comparing reduced gradients is known as pricing (a term first introduced in the context of the simplex method for linear programming). To ‘price’ a nonbasic variable x_j means that the reduced gradient d_j associated with the relevant active upper or lower bound on x_j is computed via the formula d_j = g_j - pi^T alpha_j, where alpha_j is the jth column of (A  -I). (The variable selected by such a process and the corresponding value of d_j (i.e., its reduced gradient) are the quantities +S and dj in the monitoring file output; see Monitoring Information.) If A has significantly more columns than rows (i.e., n >> m), pricing can be computationally expensive. In this case, a strategy known as partial pricing can be used to compute and compare only a subset of the d_j's.
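A small sketch of pricing a single nonbasic column, using the formula given above with made-up data:

    import numpy as np

    # Illustrative sketch: the reduced gradient of column j of (A  -I) is
    # d_j = g_j - pi^T * alpha_j, where pi solves B^T * pi = g_B.
    rng = np.random.default_rng(1)
    m, n = 3, 5
    W = np.hstack([rng.normal(size=(m, n)), -np.eye(m)])        # columns alpha_j
    g = np.concatenate([rng.normal(size=n), np.zeros(m)])       # gradient (zero for slacks)

    basic = [0, 1, 5]                                           # assumed basic columns
    pi = np.linalg.solve(W[:, basic].T, g[basic])               # dual variables
    j = 3                                                       # a nonbasic column to price
    d_j = g[j] - pi @ W[:, j]
    print(d_j)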
qpconvex1_sparse_solve is based on SQOPT, which is part of the SNOPT package described in Gill et al. (2002), which in turn utilizes functions from the MINOS package (see Murtagh and Saunders (1995)). It uses stable numerical methods throughout and includes a reliable basis package (for maintaining sparse factors of the basis matrix B), a practical anti-degeneracy procedure, efficient handling of linear constraints and bounds on the variables (by an active-set strategy), as well as automatic scaling of the constraints. Further details can be found in Algorithmic Details.
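To make the pieces above concrete, here is a minimal, untested sketch of how a small convex QP might be passed to the routine using the argument list documented at the top of this page. The problem data, the workspace sizes and the routine name 'e04nkf' given to nlp1_init() are illustrative assumptions rather than prescriptions; the example program accompanying the NAG Library document linked above shows a verified setup.

    import numpy as np
    from naginterfaces.library import opt

    # Minimal, untested sketch (all data and workspace sizes are illustrative).
    # Minimize x1 + x2 + x1**2 + x2**2 subject to x1 + 2*x2 <= 4, x >= 0,
    # with the linear term c held in free row 2 and H = 2*I.
    n, m = 2, 2                      # 2 variables; row 1 is a constraint, row 2 the free row
    iobj, ncolh = 2, 2

    a  = [1.0, 1.0, 2.0, 1.0]        # column-wise nonzeros of A (1-based indexing assumed)
    ha = [1, 2, 1, 2]
    ka = [1, 3, 5]

    bigbnd = 1.0e20
    bl = [0.0, 0.0, -bigbnd, -bigbnd]
    bu = [bigbnd, bigbnd, 4.0, bigbnd]

    names = ['        '] * 5                         # blank MPSX names
    crname = ['X1      ', 'X2      ', 'ROW1    ', 'COST    ']

    def qphx(nstate, x, data=None):
        return 2.0 * np.asarray(x)                   # H*x with H = 2*I

    comm = opt.nlp1_init('e04nkf')                   # assumed initializer argument
    res = opt.qpconvex1_sparse_solve(
        m, iobj, ncolh, a, ha, ka, bl, bu, 'C', names, crname,
        0, np.zeros(n + m), np.zeros(n + m, dtype=int),
        10000, 10000, comm, qphx=qphx)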
- References
Fourer, R, 1982, Solving staircase linear programs by the simplex method, Math. Programming (23), 274–313
Gill, P E and Murray, W, 1978, Numerically stable methods for quadratic programming, Math. Programming (14), 349–372
Gill, P E, Murray, W and Saunders, M A, 2002, SNOPT: An SQP algorithm for large-scale constrained optimization, SIAM J. Optim. (12), 979–1006
Gill, P E, Murray, W, Saunders, M A and Wright, M H, 1987, Maintaining factors of a general sparse matrix, Linear Algebra and its Applics. (88/89), 239–270
Gill, P E, Murray, W, Saunders, M A and Wright, M H, 1989, A practical anti-cycling procedure for linearly constrained optimization, Math. Programming (45), 437–474
Gill, P E, Murray, W, Saunders, M A and Wright, M H, 1991, Inertia-controlling methods for general quadratic programming, SIAM Rev. (33), 1–36
Hall, J A J and McKinnon, K I M, 1996, The simplest examples where the simplex method cycles and conditions where EXPAND fails to prevent cycling, Report MS, 96–100, Department of Mathematics and Statistics, University of Edinburgh
Murtagh, B A and Saunders, M A, 1995, MINOS 5.4 users’ guide, Report SOL 83-20R, Department of Operations Research, Stanford University