naginterfaces.library.opt.handle_solve_pennon

naginterfaces.library.opt.handle_solve_pennon(handle, x, inform, u=None, uc=None, ua=None, io_manager=None)[source]

handle_solve_pennon is a solver from the NAG optimization modelling suite for problems such as quadratic programming (QP), linear semidefinite programming (SDP) and semidefinite programming with bilinear matrix inequalities (BMI-SDP).

Note: this function uses optional algorithmic parameters, see also: handle_opt_set(), handle_opt_get().

For full information please refer to the NAG Library document for e04sv:

https://support.nag.com/numeric/nl/nagdoc_30/flhtml/e04/e04svf.html

Parameters
handle : Handle

The handle to the problem. It needs to be initialized (e.g., by handle_init()) and to hold a problem formulation compatible with handle_solve_pennon. It must not be changed between calls to the NAG optimization modelling suite.

x : float, array-like, shape (nvar)

Note: intermediate stops take place only if ‘Monitor Frequency’ > 0.

If ‘Initial X’ = ‘USER’ (the default), x is the initial estimate of the variables; otherwise, x need not be set.

On intermediate entry: the input is ignored.

inform : int

Note: intermediate stops take place only if ‘Monitor Frequency’ > 0.

On initial entry: no effect.

On intermediate entry: if set to 0, solving the current problem is terminated and the function returns errno = 20; otherwise, the function continues.

u : None or float, array-like, optional

Note: intermediate stops take place only if ‘Monitor Frequency’ > 0.

Note: if provided, u holds the Lagrangian multipliers (dual variables) for the (standard) constraints, i.e., the simple bounds defined by handle_set_simplebounds() and the set of linear constraints defined by handle_set_linconstr(): either their initial estimates, intermediate approximations or final values (see Structure of the Lagrangian Multipliers).

Note: if there are no (standard) constraints, u will not be referenced.

If ‘Initial U’ = ‘USER’ (the default is ‘AUTOMATIC’), u is the initial estimate of the Lagrangian multipliers; otherwise, u need not be set.

On intermediate entry: the input is ignored.

uc : None or float, array-like, optional

uc is reserved for future releases of the NAG Library, which will allow the definition of second-order cone constraints. It is not referenced at the moment.

ua : None or float, array-like, optional

Note: intermediate stops take place only if ‘Monitor Frequency’ > 0.

If provided, ua holds the Lagrangian multipliers for the matrix constraints defined by handle_set_linmatineq() and handle_set_quadmatineq() (see Structure of the Lagrangian Multipliers).

If there are no matrix constraints, ua will not be referenced.

If ‘Initial U’ = ‘USER’ (the default is ‘AUTOMATIC’), ua is the initial estimate of the matrix Lagrangian multipliers; otherwise, ua need not be set.

On intermediate entry: the input is ignored.

io_manager : FileObjManager, optional

Manager for I/O in this routine.

Returns
x : float, ndarray, shape (nvar)

On intermediate exit: the value of the variables at the end of the current outer iteration.

On final exit: the final value of the variables.

u : float, ndarray

On intermediate exit: the estimate of the multipliers u at the end of the current outer iteration.

On final exit: the final value of the multipliers u.

uc : float, ndarray

uc is reserved for future releases of the NAG Library, which will allow the definition of second-order cone constraints. It is not referenced at the moment.

ua : float, ndarray

On intermediate exit: the estimate of the matrix multipliers ua at the end of the current outer iteration.

On final exit: the final estimate of the multipliers ua.

rinfo : float, ndarray

On intermediate or final exit: error measures and various indicators (see Algorithmic Details) at the end of the current (or final) outer iteration, as given in the table below:

  • objective function value;

  • optimality (2);

  • feasibility (3);

  • complementarity (4);

  • minimum penalty;

  • relative precision (1);

  • relative duality gap (0);

  • precision;

  • duality gap;

  • minimum penalty for (standard) inequalities;

  • minimum penalty for matrix inequalities;

  • feasibility of equality constraints;

  • feasibility of (standard) inequalities;

  • feasibility of matrix inequalities;

  • complementarity of equality constraints;

  • complementarity of (standard) inequalities;

  • complementarity of matrix inequalities;

  • DIMACS error measures (only if turned on by ‘DIMACS Measures’);

  • reserved for future use.

stats : float, ndarray

On intermediate or final exit: solver statistics at the end of the current (or final) outer iteration, as given in the table below. Note that time statistics are provided only if ‘Stats Time’ is set (the default is ‘NO’); the measured time is returned in seconds.

  • number of the outer iterations;

  • total number of the inner iterations;

  • total number of the linesearch steps;

  • number of evaluations of the augmented Lagrangian (see (8));

  • number of evaluations of its gradient;

  • number of evaluations of its Hessian;

  • reserved for future use;

  • total running time of the solver;

  • total running time of the solver without evaluations of the user’s functions and monitoring stops;

  • time spent in the inner iterations;

  • time spent in Lagrangian multipliers updates;

  • time spent in penalty parameters updates;

  • time spent in matrix feasibility computation;

  • time of evaluations of the augmented Lagrangian;

  • time of evaluations of its gradient;

  • time of evaluations of its Hessian;

  • time of factorizations of the Newton system;

  • time of factorizations of the matrix constraints;

  • reserved for future use.

inform : int

Note: intermediate stops take place only if ‘Monitor Frequency’ > 0.

On intermediate or final exit: distinguishes whether the exit is an intermediate stop or the final one (see ‘Monitor Frequency’).

Other Parameters
‘Defaults’valueless

This special keyword may be used to reset all options to their default values. Any value given with this keyword will be ignored.

‘DIMACS Measures’str

Default

If the problem is a linear semidefinite programming problem, this argument specifies if DIMACS error measures (see Stopping Criteria) should be computed and/or checked. In other cases, this option reverts to ‘NO’ automatically.

Constraint: , or .

‘Hessian Density’str

Default

This option guides the solver on how the Hessian matrix of the augmented Lagrangian should be built. Option ‘AUTO’ leaves the decision to the solver and is the recommended choice. Setting it to ‘DENSE’ bypasses the autodetection and the Hessian is always built as a dense matrix. Option ‘SPARSE’ instructs the solver to use sparse storage and factorization of the matrix if possible.

Constraint: ‘Hessian Density’ = ‘AUTO’, ‘DENSE’ or ‘SPARSE’.

‘Infinite Bound Size’float

Default

This defines the ‘infinite’ bound in the definition of the problem constraints. Any upper bound greater than or equal to this value will be regarded as +infinity (and similarly any lower bound less than or equal to its negative will be regarded as -infinity). Note that a modification of this option does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.

Constraint: .
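As an illustration of this rule, the sketch below clamps user-supplied bounds the way the text describes. The helper name and the default threshold of 1.0e20 are assumptions for the example, not part of the NAG interface:

```python
def effective_bounds(bl, bu, bigbnd=1.0e20):
    """Illustrative only: bounds at or beyond the 'Infinite Bound Size'
    threshold (assumed 1.0e20 here) are treated as unbounded."""
    inf = float('inf')
    lo = [-inf if l <= -bigbnd else l for l in bl]
    up = [inf if u >= bigbnd else u for u in bu]
    return lo, up

# The first variable is effectively unbounded, the second keeps 0 <= x <= 5:
print(effective_bounds([-1.0e20, 0.0], [1.0e20, 5.0]))
```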

‘Initial P’str

Default

This option defines the choice of the initial penalty parameters; see Algorithm 1.

If ‘Initial P’ = ‘AUTOMATIC’, the penalty parameters are chosen automatically as set by options ‘Init Value P’ and ‘Init Value Pmat’, subject to automatic scaling. Note that the matrix penalty parameter might be increased so that the penalty function is defined for all matrix constraints at the starting point.

If ‘Initial P’ = ‘KEEP PREVIOUS’, the penalty parameters are kept from the previous run of the solver if possible. If not, this option reverts to ‘AUTOMATIC’. Note that even if the matrix penalty parameters are the same as in the previous run, they are still subject to a possible increase so that the penalty function is well defined at the starting point.

Constraint: ‘Initial P’ = ‘AUTOMATIC’ or ‘KEEP PREVIOUS’.

‘Initial U’str

Default

This argument guides the solver on which initial Lagrangian multipliers are to be used.

If ‘Initial U’ = ‘AUTOMATIC’, the Lagrangian multipliers are chosen automatically, subject to automatic scaling.

If ‘Initial U’ = ‘USER’, the values of arrays u and ua (if provided) are used as the initial Lagrangian multipliers, subject to automatic adjustments. If one or the other array is not provided, the choice for the missing data is as in ‘AUTOMATIC’.

If ‘Initial U’ = ‘KEEP PREVIOUS’, the Lagrangian multipliers are kept from the previous run of the solver. If this option is set for the first run, or if other options change the approach of the solver, the choice automatically reverts to ‘AUTOMATIC’. This might be useful if the solver is hot started, for example, to achieve higher precision of the solution.

Constraint: ‘Initial U’ = ‘AUTOMATIC’, ‘USER’ or ‘KEEP PREVIOUS’.

‘Initial X’str

Default

This argument guides which starting point is to be used.

If ‘Initial X’ = ‘AUTOMATIC’, the starting point is chosen automatically so that it satisfies the simple bounds on the variables, or as a zero vector. The input of argument x is ignored.

If ‘Initial X’ = ‘USER’, the initial values of argument x are used as a starting point.

Constraint: ‘Initial X’ = ‘AUTOMATIC’ or ‘USER’.

‘Init Value P’float

Default

This argument defines the initial penalty parameter for (standard) inequalities. A low value of the penalty causes the solution of the inner problem to be closer to the feasible region and thus to the desirable result. However, it also increases the ill-conditioning of the system. It is not advisable to set the penalty too low unless a good starting point is provided.

Constraint: .

‘Init Value Pmat’float

Default

The value of this option suggests the initial penalty parameter for matrix inequalities. It is similar to ‘Init Value P’ (and the same advice applies); however, the value is increased automatically if the matrix constraints are more infeasible than the actual penalty parameter.

Constraint: .

‘Inner Iteration Limit’int

Default

The maximum number of inner iterations (Newton steps) to be performed by Algorithm 2 in each outer iteration. Setting the option too low might lead to errno = 23. Very high values are unlikely to improve convergence.

Constraint: .

‘Inner Stop Criteria’str

Default

The precision for the solution of the inner subproblem is determined in Algorithm 1 and, under typical circumstances, Algorithm 2 is expected to reach this precision within the given ‘Inner Iteration Limit’. If any problems are detected and ‘Inner Stop Criteria’ = ‘HEURISTIC’, Algorithm 2 is allowed to stop before reaching the requested precision or the ‘Inner Iteration Limit’. This usually saves many unfruitful iterations and the solver may recover in the following iterations. If you suspect that the heuristic problem detection is not suitable for your problem, setting ‘Inner Stop Criteria’ = ‘STRICT’ disallows such behaviour.

Constraint: ‘Inner Stop Criteria’ = ‘HEURISTIC’ or ‘STRICT’.

‘Inner Stop Tolerance’float

Default

This option sets the required precision for the first inner problem solved by Algorithm 2. The precision of the solution of the inner problem does not need to be very high in the first outer iterations and it is automatically adjusted through the outer iterations to reach the optimality limit in the last one.

Setting it too restrictively (too low) increases the number of inner iterations needed in the first outer iterations and might lead to errno = 23. In certain cases it might be helpful to use a more relaxed (higher) value and to increase ‘P Update Speed’, which should reduce the number of inner iterations needed at the beginning of the computation in exchange for a possibly higher number of outer iterations.

Constraint: .

‘Linesearch Mode’str

Default

This controls the step size selection in Algorithm 2. If ‘Linesearch Mode’ = ‘FULLSTEP’ (the default for linear problems), unit steps are taken where possible and step shortening takes place only to avoid undefined regions for the matrix penalty function (see (6)). This may be used for linear problems but is not recommended for any nonlinear ones. If ‘Linesearch Mode’ = ‘ARMIJO’, an Armijo backtracking linesearch is used instead; this is a fairly basic linesearch. If ‘Linesearch Mode’ = ‘GOLDSTEIN’, a cubic safeguarded linesearch based on the Goldstein condition is employed; this is the recommended (and default) choice for nonlinear problems.

Constraint: ‘Linesearch Mode’ = ‘AUTO’, ‘FULLSTEP’, ‘ARMIJO’ or ‘GOLDSTEIN’.

‘List’str

Default

This argument may be set to ‘YES’ if you wish to turn on printing of each option specification as it is supplied.

Constraint: ‘List’ = ‘YES’ or ‘NO’.

‘Monitor Frequency’int

Default

If set to a positive value k, the solver returns to you at the end of every kth outer iteration. During these intermediate exits, the current point x and the Lagrangian multipliers u, ua (if requested) are provided, as well as the statistics and error measures (rinfo, stats). Argument inform helps to distinguish between intermediate and final exits and also allows immediate termination.

If ‘Monitor Frequency’ = 0, the solver stops only once on the final point and no intermediate exits are made.

Constraint: ‘Monitor Frequency’ >= 0.

‘Monitoring File’int

Default

If set to a non-negative value, it is interpreted as the unit number for the secondary (monitoring) output. If set to -1, no secondary output is provided. The following information is output to the unit:

  • a listing of the options;

  • problem statistics, the iteration log and the final status as set by ‘Monitoring Level’.

Constraint: ‘Monitoring File’ >= -1.

‘Monitoring Level’int

Default

This argument sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with ‘Print Level’.

Constraint: 0 <= ‘Monitoring Level’ <= 5.

‘Outer Iteration Limit’int

Default

The maximum number of outer iterations to be performed by Algorithm 1. If equal to 0, no iteration is performed; only the quantities needed in the stopping criteria are computed and returned in rinfo. This might be useful in connection with ‘Initial X’ = ‘USER’ and ‘Initial U’ = ‘USER’ to check the optimality of the given point. However, note that the rules for possible modifications of the starting point still apply; see ‘Initial X’ and ‘Initial U’. Setting the option too low might lead to errno = 22.

Constraint: ‘Outer Iteration Limit’ >= 0.

‘P Min’float

Default

This controls the lowest possible penalty value used for (standard) inequalities. In general, very small values of the penalty parameters cause ill-conditioning, which might lead to numerical difficulties. On the other hand, a very high value prevents the algorithm from reaching the requested accuracy on the feasibility. Under normal circumstances, the default value is recommended.

Constraint: .

‘Pmat Min’float

Default

This is the equivalent of ‘P Min’ for the minimal matrix penalty parameter. The same advice applies.

Constraint: .

‘Preference’str

Default

This option affects how contributions from the matrix constraints (6) to the system Hessian matrix are computed. The default option of ‘SPEED’ should be suitable in most cases. However, dealing with matrix constraints of a very high dimension may cause noticeable memory overhead and switching to ‘MEMORY’ may be required.

Constraint: ‘Preference’ = ‘SPEED’ or ‘MEMORY’.

‘Presolve Block Detect’str

Default

If ‘Presolve Block Detect’ = ‘YES’, the matrix constraints are checked during preprocessing to determine if they can be split into smaller independent ones, thus speeding up the solver.

Constraint: ‘Presolve Block Detect’ = ‘YES’ or ‘NO’.

‘Print File’int

Default

If set to a non-negative value, it is interpreted as the unit number for the primary output of the solver. If set to -1, the primary output is completely turned off, independently of other settings. The default value is the advisory message unit number at the time of the options initialization, e.g., at the initialization of the handle. The following information is output to the unit:

  • a listing of options if set by ‘Print Options’;

  • problem statistics, the iteration log and the final status from the solver as set by ‘Print Level’.

Constraint: ‘Print File’ >= -1.

‘Print Level’int

Default

This argument defines how detailed information should be printed by the solver to the primary output.

  • 0: no output from the solver;

  • 1: only the final status and the objective value;

  • 2: problem statistics, one line per outer iteration showing the progress of the solution, final status and statistics;

  • 3: as level 2 but detailed output of the outer iterations is provided and a brief overview of the inner iterations;

  • 4, 5: as level 3 but details of the inner iterations are printed as well.

Constraint: 0 <= ‘Print Level’ <= 5.

‘Print Options’str

Default

If ‘Print Options’ = ‘YES’, a listing of the options will be printed to the primary output.

Constraint: ‘Print Options’ = ‘YES’ or ‘NO’.

‘P Update Speed’int

Default

This option affects the rate at which the penalty parameters are updated (Algorithm 1, step (3)) and thus indirectly influences the overall number of outer iterations. Its value can be interpreted as the typical number of outer iterations needed to get from the initial penalty values half-way to their minimal values (see ‘P Min’ and ‘Pmat Min’). Small values cause a very aggressive penalty update strategy, which might lead to an increased number of inner iterations and possibly to numerical difficulties. On the other hand, high values produce a relatively conservative approach, which leads to a higher number of outer iterations.

If the solver encounters difficulties on your problem, a higher value might help. If your problem is working fine, setting a lower value might increase the speed.

Constraint: .

‘Stats Time’str

Default

This argument turns on timing of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful when choosing between different solving approaches. It is possible to choose between CPU and wall clock time. Choice ‘YES’ is equivalent to ‘WALL CLOCK’.

Constraint: ‘Stats Time’ = ‘YES’, ‘NO’, ‘CPU TIME’ or ‘WALL CLOCK’.

‘Stop Criteria’str

Default

If ‘Stop Criteria’ = ‘SOFT’, the solver is allowed to stop prematurely with a suboptimal solution, errno = 50, if it predicts that a better estimate of the solution cannot be reached. This is the recommended option.

Constraint: ‘Stop Criteria’ = ‘SOFT’ or ‘STRICT’.

‘Stop Tolerance 1’float

Default

This option defines the tolerance used for the relative duality gap (0) and the relative precision (1); see Stopping Criteria.

Constraint: .

‘Stop Tolerance 2’float

Default

This option sets the value used for the optimality (2) and complementarity (4) tests from the KKT conditions or, if ‘DIMACS Measures’ is turned on, for all DIMACS error measures instead. See Stopping Criteria.

Constraint: .

‘Stop Tolerance Feasibility’float

Default

This argument places an acceptance limit on the feasibility of the solution (3). See Stopping Criteria.

Constraint: .

‘Task’str

Default

This argument specifies the required direction of the optimization. If ‘Task’ = ‘FEASIBLE POINT’, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function was set, ‘Task’ reverts to ‘FEASIBLE POINT’ automatically.

Constraint: ‘Task’ = ‘MINIMIZE’, ‘MAXIMIZE’ or ‘FEASIBLE POINT’.

‘Transform Constraints’str

Default

This argument controls how equality constraints are treated by the solver. If ‘Transform Constraints’ = ‘EQUALITIES’, all equality constraints from (4) are treated as two inequalities; see Solution of the inner problem. This is the default and the only choice in this release for equality constrained problems.

Constraint: ‘Transform Constraints’ = ‘AUTO’, ‘NO’ or ‘EQUALITIES’.

‘U Update Restriction’float

Default

This defines the bounds on the updates of the Lagrangian multipliers for (standard) inequalities between outer iterations. Values close to 1 limit the changes of the multipliers and serve as a kind of smoothing; lower values allow more significant changes.

Based on numerical experience, large variations in the multipliers may lead to a large number of iterations in the subsequent step and might disturb the convergence due to ill-conditioning.

It might be worth experimenting with the value on your particular problem. Mid-range values are recommended over the more extreme ones.

Constraint: .

‘Umat Update Restriction’float

Default

This is the equivalent of ‘U Update Restriction’ for matrix constraints (see Overview). The advice above applies equally.

Constraint: .

Raises
NagValueError
(errno )

The handle has not been initialized.

(errno )

The handle does not belong to the NAG optimization modelling suite, has not been initialized properly or is corrupted.

(errno )

The handle has not been initialized properly or is corrupted.

(errno )

This solver does not support the model defined in the handle.

(errno )

The problem is already being solved.

(errno )

On entry, the dimension of x does not match the expected value.

Constraint: the dimension of x must match the current number of variables of the model in the handle.

(errno )

On entry, the dimension of u does not match the size of the Lagrangian multipliers for the (standard) constraints.

(errno )

On entry, the dimension of u does not match the size of the Lagrangian multipliers for the (standard) constraints.

u must be empty or None when there are no (standard) constraints.

(errno )

On entry, the dimension of ua does not match the size of the Lagrangian multipliers for the matrix constraints.

(errno )

On entry, the dimension of ua does not match the size of the Lagrangian multipliers for the matrix constraints.

ua must be empty or None when there are no matrix constraints.

(errno )

On entry, the dimension of uc does not match the size of the Lagrangian multipliers for the second-order cone constraints.

uc must be empty or None when there are no second-order cone constraints.

(errno )

The current starting point is unusable.

(errno )

The problem was found to be infeasible during preprocessing.

(errno )

The problem was found unbounded during preprocessing.

(errno )

The problem seems to be infeasible; the algorithm was stopped.

(errno )

The problem seems to be unbounded; the algorithm was stopped.

Warns
NagAlgorithmicWarning
(errno )

The algorithm converged to a suboptimal solution.

The full accuracy was not achieved. The solution should still be usable.

NagAlgorithmicMajorWarning
(errno )

User requested termination during a monitoring step.

(errno )

Outer iteration limit has been reached.

The requested accuracy is not achieved.

(errno )

The inner subproblem could not be solved to the required accuracy.

Inner iteration limit has been reached.

(errno )

The inner subproblem could not be solved to the required accuracy.

Limited progress in the inner subproblem triggered a stop (heuristic inner stop criteria).

(errno )

The inner subproblem could not be solved to the required accuracy.

Line search or another internal component failed.

(errno )

Unable to make progress; the algorithm was stopped.

Notes

handle_solve_pennon serves as a solver for compatible problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for functions in the NAG optimization modelling suite. First, the problem handle is initialized by calling handle_init(). Then some of the functions handle_set_linobj(), handle_set_quadobj(), handle_set_simplebounds(), handle_set_linconstr(), handle_set_linmatineq() or handle_set_quadmatineq() may be used to formulate the objective function, (standard) constraints and matrix constraints of the problem. Once the problem is fully set, the handle may be passed to the solver. When the handle is no longer needed, handle_free() should be called to destroy it and deallocate the memory held within. See the E04 Introduction for more details about the NAG optimization modelling suite.
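The calling sequence described above can be sketched as follows. The function names come from this suite, but the problem data, the option string and the exact fields of the returned object are illustrative assumptions; treat this as a sketch rather than a definitive program (running it requires the NAG Library for Python and a valid licence):

```python
def solve_toy_problem():
    """Sketch of the typical handle workflow on a made-up toy problem."""
    # naginterfaces is the NAG Library for Python; import locally so that
    # the sketch can be read without the library installed.
    from naginterfaces.library import opt

    nvar = 2
    handle = opt.handle_init(nvar)                 # initialize an empty problem
    opt.handle_set_linobj(handle, [1.0, 2.0])      # objective: x1 + 2*x2
    opt.handle_set_simplebounds(                   # simple bounds: 0 <= x <= 5
        handle, [0.0, 0.0], [5.0, 5.0])
    opt.handle_opt_set(handle, 'Print Level = 1')  # optional algorithmic option
    sln = opt.handle_solve_pennon(handle, [1.0, 1.0], 1)  # x estimate, inform
    opt.handle_free(handle)                        # deallocate internal memory
    return sln
```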

Problems which can be defined this way are, for example, (generally nonconvex) quadratic programming (QP)

    minimize (1/2) x'Hx + c'x
    subject to  l_B <= Bx <= u_B,  l_x <= x <= u_x,

linear semidefinite programming problems (SDP)

    minimize c'x
    subject to  sum_{i=1..n} x_i A_i^k - A_0^k >= 0 (positive semidefinite), k = 1,...,m_A,
                l_B <= Bx <= u_B,  l_x <= x <= u_x,

or semidefinite programming problems with bilinear matrix inequalities (BMI-SDP), in which the matrix inequalities may in addition contain bilinear terms sum_{i,j} x_i x_j Q_ij^k and the objective may contain the quadratic term (1/2) x'Hx.

Here c, l_x and u_x are n-dimensional vectors, H is a symmetric n by n matrix, l_B and u_B are m_B-dimensional vectors, B is a general m_B by n rectangular matrix and the A_i^k and Q_ij^k are symmetric matrices. The expression S >= 0 stands for a constraint on the eigenvalues of a symmetric matrix S, namely, all the eigenvalues should be non-negative, i.e., the matrix should be positive semidefinite. See the relevant functions in the suite for more details on the problem formulation.

The solver is based on a generalized Augmented Lagrangian method with a suitable choice of standard and matrix penalty functions. For a detailed description of the algorithm see Algorithmic Details. Under standard assumptions on the problem (Slater constraint qualification, boundedness of the objective function on the feasible set, see Stingl (2006) for details) the algorithm converges to a local solution. In case of convex problems such as linear SDP or convex QP, this is the global solution. The solver is suitable for both small dense and large-scale sparse problems.

The algorithm behaviour and solver strategy can be modified by various options (see Other Parameters) which can be set by handle_opt_set() and handle_opt_set_file() anytime between the initialization of the handle by handle_init() and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various starting points and/or options.

There are several options with a multiple choice where the default choice is ‘AUTO’ (for example, ‘Hessian Density’). This value means that the decision over the option is left to the solver based on the structure of the problem. The option getter handle_opt_get() can be called to retrieve the chosen value of such an option, as well as the values of any other options.

Option ‘Task’ may be used to switch the problem to maximization or to ignore the objective function and find only a feasible point.

Option ‘Monitor Frequency’ may be used to turn on the monitor mode of the solver. The solver invoked in this mode pauses regularly even before the optimal point is found to allow monitoring the progress from the calling program. All the important error measures and statistics are available in the calling program, which may terminate the solver early if desired (see argument inform).

Structure of the Lagrangian Multipliers

The algorithm works internally with estimates of both the decision variables, denoted by x, and the Lagrangian multipliers (dual variables) for standard and matrix constraints, denoted by u and ua, respectively. You may provide initial estimates, request approximations during the run (if the monitor mode is turned on) and obtain the final values. The Lagrangian multipliers are split into two arrays: the multipliers for (standard) constraints are stored in array u and the multipliers for matrix constraints in array ua. Both arrays need to conform to the structure of the constraints.

If the simple bounds were defined (handle_set_simplebounds() was successfully called), the first 2n elements of u belong to the corresponding Lagrangian multipliers, interleaving a multiplier for the lower and for the upper bound for each variable. If any of the bounds were set to infinity, the corresponding Lagrangian multipliers are set to 0 and may be ignored.

Similarly, the following elements of u belong to the multipliers for the linear constraints, if formulated by handle_set_linconstr(). The organization is the same, i.e., the multipliers for the lower and upper bounds of each constraint are alternated and zeros are used for any missing (infinite bound) constraint.
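The interleaved ordering described above can be sketched in pure Python. The helper name and the tuple layout are illustrative only, not part of the NAG interface:

```python
def u_layout(nvar, nclin):
    """Positions in u of the (lower, upper) multiplier pair for each
    variable bound, followed by each linear constraint, assuming the
    interleaved ordering described in the text."""
    layout = []
    for i in range(nvar):            # simple-bound multipliers come first
        layout.append(('bound', i, 2 * i, 2 * i + 1))
    off = 2 * nvar                   # linear-constraint multipliers follow
    for j in range(nclin):
        layout.append(('linear', j, off + 2 * j, off + 2 * j + 1))
    return layout

# 2 variables and 1 linear constraint: u has 2*2 + 2*1 = 6 elements.
print(u_layout(2, 1))  # [('bound', 0, 0, 1), ('bound', 1, 2, 3), ('linear', 0, 4, 5)]
```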

A Lagrangian multiplier for a matrix constraint (one block) of dimension d is a dense symmetric matrix of the same dimension. All multipliers are stored next to each other in array ua in the same order as the matrix constraints were defined by handle_set_linmatineq() and handle_set_quadmatineq(). The lower triangle of each is stored in packed column order (see the F07 Introduction), so a block of dimension d occupies d(d+1)/2 elements. For example, if there are four matrix constraints of dimensions 7, 3, 1 and 1, the dimension of array ua should be 36. The first 28 elements belong to the packed lower triangle of the first multiplier, followed by six elements of the second and one element for each of the third and the fourth.
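The packed-storage arithmetic of the example above can be reproduced directly (the lower triangle of a d by d symmetric matrix holds d(d+1)/2 elements); the helper name is illustrative only:

```python
def ua_offsets(dims):
    """Offset into ua of each packed matrix multiplier, plus total length."""
    offsets, total = [], 0
    for d in dims:
        offsets.append(total)
        total += d * (d + 1) // 2   # packed lower triangle of a d x d block
    return offsets, total

# Four matrix constraints of dimensions 7, 3, 1 and 1:
offs, n = ua_offsets([7, 3, 1, 1])
print(offs, n)  # [0, 28, 34, 35] 36
```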

Approximation of the Lagrangian Multipliers

By the nature of the algorithm, all inequality Lagrangian multipliers are always kept positive during the computational process. This applies even to the Lagrangian multipliers of constraints that are inactive at the solution: they will only be close to zero, although they would normally be exactly zero. This is one of the major differences between results from solvers based on the active set method (such as qpconvex2_sparse_solve()) and others, such as handle_solve_pennon or interior point methods. As a consequence, the initial estimate of the multipliers (if provided) might be adjusted by the solver to be sufficiently positive; also, the estimates returned during the intermediate exits might be only a very crude approximation to their final values as they do not satisfy all the Karush–Kuhn–Tucker (KKT) conditions.

Another difference is that qpconvex2_sparse_solve() merges the multipliers for both the lower and upper inequality into one element whose sign determines the inequality, because there can be at most one active constraint and the multiplier for the inactive one is exactly zero. Negative multipliers are associated with the upper bounds and positive with the lower bounds. On the other hand, handle_solve_pennon works with both multipliers at the same time, so they are returned in two elements, one for the lower bound and the other for the upper bound (see Structure of the Lagrangian Multipliers). An equivalent result can be achieved by subtracting the upper bound multiplier from the lower one. This holds even when equalities are interpreted as two inequalities (see option ‘Transform Constraints’).
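The conversion mentioned above (subtracting the upper-bound multiplier from the lower-bound one to obtain the single signed multiplier used by active-set solvers) is a one-liner; the helper name is illustrative only:

```python
def signed_multipliers(u):
    """Collapse interleaved (lower, upper) multiplier pairs into one signed
    multiplier per bound: positive for an active lower bound, negative for
    an active upper bound."""
    return [lo - up for lo, up in zip(u[0::2], u[1::2])]

# A nearly active lower bound on x1 and upper bound on x2:
print(signed_multipliers([3.0, 1e-8, 2e-8, 5.0]))  # approximately [3.0, -5.0]
```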

References

Ben–Tal, A and Zibulevsky, M, 1997, Penalty/barrier multiplier methods for convex programming problems, SIAM Journal on Optimization (7), 347–366

Fujisawa, K, Kojima, M, Nakata, K, 1997, Exploiting sparsity in primal-dual interior-point method for semidefinite programming, Math. Programming (79), 235–253

Hogg, J D and Scott, J A, 2011, HSL MA97: a bit-compatible multifrontal code for sparse symmetric systems, RAL Technical Report. RAL-TR-2011-024

Kočvara, M and Stingl, M, 2003, PENNON – a code for convex nonlinear and semidefinite programming, Optimization Methods and Software (18(3)), 317–333

Kočvara, M and Stingl, M, 2007, On the solution of large-scale SDP problems by the modified barrier method using iterative solvers, Math. Programming (Series B) (109(2–3)), 413–444

Mittelmann, H D, 2003, An independent benchmarking of SDP and SOCP solvers, Math. Programming (95), 407–430

Stingl, M, 2006, On the Solution of Nonlinear Semidefinite Programs by Augmented Lagrangian Methods, PhD thesis, Institute of Applied Mathematics II, Friedrich–Alexander University of Erlangen–Nuremberg