NAG FL Interface
e04svf (handle_​solve_​pennon)

Note: this routine uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.
1 Purpose

e04svf is a solver from the NAG optimization modelling suite for problems such as Quadratic Programming (QP), linear Semidefinite Programming (SDP) and semidefinite programming with bilinear matrix inequalities (BMI-SDP).

2 Specification

Fortran Interface
Subroutine e04svf ( handle, nvar, x, nnzu, u, nnzuc, uc, nnzua, ua, rinfo, stats, inform, ifail)
Integer, Intent (In) :: nvar, nnzu, nnzuc, nnzua
Integer, Intent (Inout) :: inform, ifail
Real (Kind=nag_wp), Intent (Inout) :: x(nvar), u(nnzu), uc(nnzuc), ua(nnzua)
Real (Kind=nag_wp), Intent (Out) :: rinfo(32), stats(32)
Type (c_ptr), Intent (In) :: handle
C Header Interface
#include <nag.h>
void  e04svf_ (void **handle, const Integer *nvar, double x[], const Integer *nnzu, double u[], const Integer *nnzuc, double uc[], const Integer *nnzua, double ua[], double rinfo[], double stats[], Integer *inform, Integer *ifail)
The routine may be called by the names e04svf or nagf_opt_handle_solve_pennon.

3 Description

e04svf serves as a solver for compatible problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for routines in the NAG optimization modelling suite. First, the problem handle is initialized by calling e04raf. Then some of the routines e04ref, e04rff, e04rhf, e04rjf, e04rnf or e04rpf may be used to formulate the objective function, (standard) constraints and matrix constraints of the problem. Once the problem is fully set, the handle may be passed to the solver. When the handle is no longer needed, e04rzf should be called to destroy it and deallocate the memory held within. See Section 3.1 in the E04 Chapter Introduction for more details about the NAG optimization modelling suite.
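The following schematic fragment illustrates this life-cycle for a toy problem with a linear objective and no constraints. It is a sketch only: the argument lists of the problem-definition routines (e04ref here) are quoted from memory of the suite's conventions and should be checked against their routine documents, and a realistic problem would also define bounds and (matrix) constraints before the solve.

      Program e04svf_sketch
        Use, Intrinsic     :: iso_c_binding, Only: c_ptr
        Use nag_library, Only: nag_wp, e04raf, e04ref, e04rzf, e04svf
        Implicit None
        Integer, Parameter :: nvar = 2, nnzu = 0, nnzuc = 0, nnzua = 0
        Type (c_ptr)       :: handle
        Real (Kind=nag_wp) :: x(nvar), cvec(nvar)
        Real (Kind=nag_wp) :: u(nnzu), uc(nnzuc), ua(nnzua)
        Real (Kind=nag_wp) :: rinfo(32), stats(32)
        Integer            :: inform, ifail

        ifail = 0
        Call e04raf(handle,nvar,ifail)              ! initialize an empty handle
        cvec = (/1.0_nag_wp,2.0_nag_wp/)
        Call e04ref(handle,nvar,cvec,ifail)         ! linear objective c^T x (sketch)
        ! ... e04rhf/e04rjf/e04rnf/e04rpf would add bounds and (matrix) constraints ...

        x = 0.0_nag_wp                              ! starting point (Initial X = USER)
        inform = 0
        ifail = -1
        Call e04svf(handle,nvar,x,nnzu,u,nnzuc,uc,nnzua,ua,rinfo,stats,inform,ifail)
        If (ifail==0) Write (*,*) 'Objective value =', rinfo(1)

        ifail = 0
        Call e04rzf(handle,ifail)                   ! destroy the handle and free memory
      End Program e04svf_sketch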
Problems which can be defined this way are, for example, (generally nonconvex) Quadratic Programming (QP)
$$\begin{array}{llll}
\underset{x\in\mathbb{R}^n}{\text{minimize}} & \tfrac{1}{2}x^T H x + c^T x & & (a)\\
\text{subject to} & l_B \le Bx \le u_B, & & (b)\\
& l_x \le x \le u_x, & & (c)
\end{array} \qquad (1)$$
linear semidefinite programming problems (SDP)
$$\begin{array}{llll}
\underset{x\in\mathbb{R}^n}{\text{minimize}} & c^T x & & (a)\\
\text{subject to} & \displaystyle\sum_{i=1}^{n} x_i A_i^k - A_0^k \succeq 0, & k=1,\ldots,m_A, & (b)\\
& l_B \le Bx \le u_B, & & (c)\\
& l_x \le x \le u_x, & & (d)
\end{array} \qquad (2)$$
or semidefinite programming problems with bilinear matrix inequalities (BMI-SDP)
$$\begin{array}{llll}
\underset{x\in\mathbb{R}^n}{\text{minimize}} & \tfrac{1}{2}x^T H x + c^T x & & (a)\\
\text{subject to} & \displaystyle\sum_{i,j=1}^{n} x_i x_j Q_{ij}^k + \sum_{i=1}^{n} x_i A_i^k - A_0^k \succeq 0, & k=1,\ldots,m_A, & (b)\\
& l_B \le Bx \le u_B, & & (c)\\
& l_x \le x \le u_x. & & (d)
\end{array} \qquad (3)$$
Here c, l_x and u_x are n-dimensional vectors, H is a symmetric n×n matrix, l_B, u_B are m_B-dimensional vectors, B is a general m_B×n rectangular matrix and A_i^k, Q_ij^k are symmetric matrices. The expression S ⪰ 0 stands for a constraint on the eigenvalues of a symmetric matrix S, namely, all the eigenvalues should be non-negative, i.e., the matrix should be positive semidefinite. See relevant routines in the suite for more details on the problem formulation.
The solver is based on a generalized Augmented Lagrangian method with a suitable choice of standard and matrix penalty functions. For a detailed description of the algorithm see Section 11. Under standard assumptions on the problem (Slater constraint qualification and boundedness of the objective function on the feasible set; see Stingl (2006) for details) the algorithm converges to a local solution. In the case of convex problems, such as linear SDP or convex QP, this is the global solution. The solver is suitable for both small dense and large-scale sparse problems.
The algorithm behaviour and solver strategy can be modified by various optional parameters (see Section 12) which can be set by e04zmf and e04zpf anytime between the initialization of the handle by e04raf and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various starting points and/or optional parameters.
There are several optional parameters with a multiple choice whose default is AUTO (for example, Hessian Density). This value means that the decision over the option is left to the solver, based on the structure of the problem. The option getter e04znf can be called to retrieve the value chosen for these options as well as the values of any other options.
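For example, options may be adjusted and queried as in the following fragment (a sketch reusing the declarations of the example earlier in this section; the argument list of e04znf is quoted from the suite's usual conventions and should be verified against its routine document):

      ! additional declarations, with e04zmf and e04znf added to the Use statement
      Character (40)     :: cvalue
      Real (Kind=nag_wp) :: rvalue
      Integer            :: ivalue, optype
      ...
      ifail = 0
      Call e04zmf(handle,'Hessian Density = Auto',ifail)   ! leave the decision to the solver
      Call e04zmf(handle,'Print Level = 3',ifail)
      ! ... solve with e04svf ...
      Call e04znf(handle,'Hessian Density',ivalue,rvalue,cvalue,optype,ifail)
      Write (*,*) 'Hessian Density chosen by the solver: ', cvalue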
Optional parameter Task may be used to switch the problem to maximization or to ignore the objective function and find only a feasible point.
Optional parameter Monitor Frequency may be used to turn on the monitor mode of the solver. The solver invoked in this mode pauses regularly even before the optimal point is found to allow monitoring the progress from the calling program. All the important error measures and statistics are available in the calling program which may terminate the solver early if desired (see argument inform).
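A schematic monitoring loop might look as follows (again reusing the declarations of the sketch above and assuming the solver is simply called again, with unchanged arguments, after each intermediate exit; see Section 5 and the example program for the exact protocol):

      Call e04zmf(handle,'Monitor Frequency = 1',ifail)   ! pause after every outer iteration
      inform = 0                                          ! no effect on initial entry
      Do
        ifail = -1
        Call e04svf(handle,nvar,x,nnzu,u,nnzuc,uc,nnzua,ua,rinfo,stats,inform,ifail)
        If (inform==0) Exit                               ! final exit, check ifail
        ! intermediate exit: inspect x, u, ua, rinfo and stats here;
        ! setting inform = 0 before the next call terminates the solve (ifail = 20)
        Write (*,*) 'outer iteration', stats(1), ' objective', rinfo(1)
      End Do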

3.1 Structure of the Lagrangian Multipliers

The algorithm works internally with estimates of both the decision variables, denoted by x, and the Lagrangian multipliers (dual variables) for standard and matrix constraints, denoted by u and U, respectively. You may provide initial estimates, request approximations during the run (the monitor mode turned on) and obtain the final values. The Lagrangian multipliers are split into two arrays, the multipliers u for (standard) constraints are stored in array u and multipliers U for matrix constraints in array ua. Both arrays need to conform to the structure of the constraints.
If the simple bounds were defined (e04rhf was successfully called), the first 2n elements of u belong to the corresponding Lagrangian multipliers, interleaving a multiplier for the lower and for the upper bound for each xi. If any of the bounds were set to infinity, the corresponding Lagrangian multipliers are set to 0 and may be ignored.
Similarly, the following 2mB elements of u belong to multipliers for the linear constraints, if formulated by e04rjf. The organization is the same, i.e., the multipliers for each constraint for the lower and upper bounds are alternated and zeroes are used for any missing (infinite bound) constraint.
A Lagrangian multiplier for a matrix constraint (one block) of dimension d×d is a dense symmetric matrix of the same dimension. All multipliers U are stored next to each other in array ua in the same order as the matrix constraints were defined by e04rnf and e04rpf. The lower triangle of each is stored in the packed column order (see Section 3.3.2 in the F07 Chapter Introduction). For example, if there are four matrix constraints of dimensions 7, 3, 1, 1, the dimension of array ua should be 36. The first 28 elements (d1×(d1+1)/2) belong to the packed lower triangle of U1, followed by six elements of U2 and one element for each U3 and U4. See for example Section 10 in e04rdf.
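For example, for a problem with n variables with simple bounds, m_B linear constraints and the four matrix constraints of dimensions 7, 3, 1 and 1 quoted above, the multiplier arrays would be sized as in the following sketch (the values of n and m_B are purely illustrative):

      Integer, Parameter :: n = 7, mb = 8
      Integer, Parameter :: dims(4) = (/7,3,1,1/)   ! dimensions of the matrix constraints
      Integer            :: nnzu, nnzua
      nnzu  = 2*n + 2*mb                            ! bound multipliers first, then linear ones
      nnzua = Sum(dims*(dims+1))/2                  ! = 36, packed lower triangles of U1,...,U4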

3.2 Approximation of the Lagrangian Multipliers

By the nature of the algorithm, all inequality Lagrangian multipliers u, U are kept strictly positive throughout the computation. This applies even to the Lagrangian multipliers of constraints that are inactive at the solution: they will only be close to zero, although they would normally be exactly zero. This is one of the major differences between results from solvers based on the active-set method (such as e04nqf) and others, such as e04svf or interior point methods. As a consequence, the initial estimate of the multipliers (if provided) might be adjusted by the solver to be sufficiently positive; also the estimates returned during the intermediate exits might only be a very crude approximation to their final values as they do not satisfy all the Karush–Kuhn–Tucker (KKT) conditions.
Another difference is that e04nqf merges the multipliers for the lower and upper inequality into one element whose sign determines the bound to which it belongs, because at most one of them can be active and the multiplier for the inactive bound is exactly zero. Negative multipliers are associated with the upper bounds and positive ones with the lower bounds. On the other hand, e04svf works with both multipliers at the same time, so they are returned in two elements, one for the lower bound and the other for the upper bound (see Section 3.1). An equivalent result can be achieved by subtracting the upper bound multiplier from the lower one. This holds even when equalities are interpreted as two inequalities (see optional parameter Transform Constraints).
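The conversion just mentioned can be sketched as follows (assuming simple bounds on all n variables were defined, so that their multipliers occupy the first 2n elements of u; us is a hypothetical array for the signed result):

      Do i = 1, n
        us(i) = u(2*i-1) - u(2*i)   ! >0: lower bound active, <0: upper bound active
      End Do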

4 References

Ben–Tal A and Zibulevsky M (1997) Penalty/barrier multiplier methods for convex programming problems SIAM Journal on Optimization 7 347–366
Fujisawa K, Kojima M, Nakata K (1997) Exploiting sparsity in primal-dual interior-point method for semidefinite programming Math. Programming 79 235–253
Hogg J D and Scott J A (2011) HSL MA97: a bit-compatible multifrontal code for sparse symmetric systems RAL Technical Report. RAL-TR-2011-024
Kočvara M and Stingl M (2003) PENNON – a code for convex nonlinear and semidefinite programming Optimization Methods and Software 18(3) 317–333
Kočvara M and Stingl M (2007) On the solution of large-scale SDP problems by the modified barrier method using iterative solvers Math. Programming (Series B) 109(2–3) 413–444
Mittelmann H D (2003) An independent benchmarking of SDP and SOCP solvers Math. Programming 95 407–430
Stingl M (2006) On the Solution of Nonlinear Semidefinite Programs by Augmented Lagrangian Methods, PhD thesis Institute of Applied Mathematics II, Friedrich–Alexander University of Erlangen–Nuremberg

5 Arguments

1: handle Type (c_ptr) Input
On entry: the handle to the problem. It needs to be initialized (e.g., by e04raf) and to hold a problem formulation compatible with e04svf. It must not be changed between calls to the NAG optimization modelling suite.
2: nvar Integer Input
On entry: n, the current number of decision variables x in the model.
3: x(nvar) Real (Kind=nag_wp) array Input/Output
Note: intermediate stops take place only if Monitor Frequency>0.
On entry: if Initial X=USER (the default), x0, the initial estimate of the variables x; otherwise, x need not be set.
On intermediate exit: the value of the variables x at the end of the current outer iteration.
On intermediate re-entry: the input is ignored.
On final exit: the final value of the variables x.
4: nnzu Integer Input
On entry: the dimension of array u.
If nnzu=0, u will not be referenced; otherwise, it needs to match the dimension of constraints defined by e04rhf and e04rjf as explained in Section 3.1.
Constraint: nnzu ≥ 0.
5: u(nnzu) Real (Kind=nag_wp) array Input/Output
Note: intermediate stops take place only if Monitor Frequency>0.
If nnzu>0, u holds the Lagrangian multipliers (dual variables) for the (standard) constraints, i.e., the simple bounds defined by e04rhf and the set of mB linear constraints defined by e04rjf: either their initial estimates, intermediate approximations or final values, see Section 3.1.
If nnzu=0, u will not be referenced.
On entry: if Initial U=USER (the default is AUTOMATIC), u0, the initial estimate of the Lagrangian multipliers u; otherwise, u need not be set.
On intermediate exit: the estimate of the multipliers u at the end of the current outer iteration.
On intermediate re-entry: the input is ignored.
On exit: the final value of multipliers u.
6: nnzuc Integer Input
On entry: the dimension of array uc. If nnzuc=0, uc will not be referenced. This argument is reserved for future releases of the NAG Library which will allow definition of second-order cone constraints. It needs to be set to 0 at the moment.
Constraint: nnzuc=0.
7: uc(nnzuc) Real (Kind=nag_wp) array Input/Output
uc is reserved for future releases of the NAG Library which will allow definition of second-order cone constraints. It is not referenced at the moment.
8: nnzua Integer Input
On entry: the dimension of array ua. If nnzua=0, ua will not be referenced; otherwise, it needs to match the total number of nonzeros in all matrix Lagrangian multipliers (constraints defined by e04rnf and e04rpf) as explained in Section 3.1.
Constraint: nnzua ≥ 0.
9: ua(nnzua) Real (Kind=nag_wp) array Input/Output
Note: intermediate stops take place only if Monitor Frequency>0.
If nnzua>0, ua holds the Lagrangian multipliers for matrix constraints defined by e04rnf and e04rpf, see Section 3.1.
If nnzua=0, ua will not be referenced.
On entry: if Initial U=USER (the default is AUTOMATIC), U0, the initial estimate of the matrix Lagrangian multipliers U; otherwise, ua need not be set.
On intermediate exit: the estimate of the matrix multipliers U at the end of the outer iteration.
On intermediate re-entry: the input is ignored.
On final exit: the final estimate of the multipliers U.
10: rinfo(32) Real (Kind=nag_wp) array Output
On intermediate or final exit: error measures and various indicators (see Section 11 for details) at the end of the current (or final) outer iteration as given in the table below:
1 objective function value f(x)
2 optimality (12)
3 feasibility (13)
4 complementarity (14)
5 minimum penalty
6 relative precision (11)
7 relative duality gap (10)
8 precision |f(x^ℓ) − f(x^(ℓ+1))|
9 duality gap
10 minimum penalty for (standard) inequalities p
11 minimum penalty for matrix inequalities P
12 feasibility of equality constraints
13 feasibility of (standard) inequalities
14 feasibility of matrix inequalities
15 complementarity of equality constraints
16 complementarity of (standard) inequalities
17 complementarity of matrix inequalities
18–23 DIMACS error measures (16) (only if turned on by DIMACS Measures)
24–32 reserved for future use
11: stats(32) Real (Kind=nag_wp) array Output
On intermediate or final exit: solver statistics at the end of the current (or final) outer iteration as given in the table below. Note that time statistics are provided only if Stats Time is set (the default is NO); the measured time is returned in seconds.
1 Number of the outer iterations.
2 Total number of the inner iterations.
3 Total number of the linesearch steps.
4 Number of evaluations of the augmented Lagrangian F(·) (see (8)).
5 Number of evaluations of ∇F(·).
6 Number of evaluations of ∇²F(·).
7 Reserved for future use.
8 Total running time of the solver.
9 Total running time of the solver without evaluations of the user's functions and monitoring stops.
10 Time spent in the inner iterations.
11 Time spent in Lagrangian multipliers updates.
12 Time spent in penalty parameters updates.
13 Time spent in matrix feasibility computation.
14 Time of evaluations of F(·).
15 Time of evaluations of ∇F(·).
16 Time of evaluations of ∇²F(·).
17 Time of factorizations of the Newton system.
18 Time of factorizations of the matrix constraints.
19–32 reserved for future use.
12: inform Integer Input/Output
Note: intermediate stops take place only if Monitor Frequency>0.
On initial entry: no effect.
On intermediate exit: inform=1.
On intermediate re-entry: if set to 0, solving the current problem is terminated and the routine returns ifail=20; otherwise, the routine continues.
On final exit: inform=0.
13: ifail Integer Input/Output
On entry: ifail must be set to 0, -1 or 1 to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of 0 causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of -1 means that an error message is printed while a value of 1 means that it is not.
If halting is not appropriate, the value -1 or 1 is recommended. If message printing is undesirable, then the value 1 is recommended. Otherwise, the value -1 is recommended since useful values can be provided in some output arguments even when ifail ≠ 0 on exit. When the value -1 or 1 is used it is essential to test the value of ifail on exit.
On exit: ifail=0 unless the routine detects an error or a warning has been flagged (see Section 6).

6 Error Indicators and Warnings

If on entry ifail=0 or -1, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
Note: in some cases e04svf may return useful information.
ifail=1
The supplied handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been properly initialized or it has been corrupted.
ifail=2
This solver does not support the model defined in the handle.
ifail=3
The problem is already being solved.
ifail=4
On entry, nvar=value, expected value=value.
Constraint: nvar must match the current number of variables of the model in the handle.
ifail=5
On entry, nnzu=value.
nnzu does not match the size of the Lagrangian multipliers for (standard) constraints.
nnzu=0 or value.
On entry, nnzu=value.
nnzu does not match the size of the Lagrangian multipliers for (standard) constraints.
nnzu=0 when there are no (standard) constraints.
On entry, nnzua=value.
nnzua does not match the size of the Lagrangian multipliers for matrix constraints.
nnzua=0 or value.
On entry, nnzua=value.
nnzua does not match the size of the Lagrangian multipliers for matrix constraints.
nnzua=0 when there are no matrix constraints.
On entry, nnzuc=value.
nnzuc does not match the size of the Lagrangian multipliers for second-order cone constraints.
nnzuc=0 when there are no second-order cone constraints.
ifail=20
User requested termination during a monitoring step.
ifail=21
The current starting point is unusable.
The starting point x0, either provided by you (if Initial X=USER, the default) or the automatic estimate (if Initial X=AUTOMATIC), must not be extremely infeasible in the matrix constraints (infeasibility of order 10^6 or higher) and all the functions used in the problem formulation must be evaluatable at it.
In the unlikely case this error is triggered, it is necessary to provide a better estimate of the initial values.
ifail=22
Outer iteration limit has been reached.
The requested accuracy is not achieved.
If Outer Iteration Limit is left to the default, this error indicates numerical difficulties. Consider whether the stopping tolerances (Stop Tolerance 1, Stop Tolerance 2, Stop Tolerance Feasibility) are set too low or optional parameters affecting the behaviour of the penalty updates (P Update Speed, P Min or Pmat Min) have been modified inadvisedly. The iteration log should reveal more about the misbehaviour. Providing a different starting point might be of help in certain situations.
ifail=23
The inner subproblem could not be solved to the required accuracy.
Inner iteration limit has been reached.
The inner subproblem could not be solved to the required accuracy.
Limited progress in the inner subproblem triggered a stop (heuristic inner stop criteria).
The inner subproblem could not be solved to the required accuracy.
Line search or another internal component failed.
A problem with the convergence of the inner subproblem is typically a sign of numerical difficulties of the whole algorithm. The inner subproblem might be stopped before reaching the required accuracy because of the Inner Iteration Limit, a heuristic detected no progress in the inner iterations (if Inner Stop Criteria=HEURISTIC, default) or if an internal component failed (for example, line search was unable to find a suitable step). The algorithm tries to recover, however, it might give up after several attempts with one of these error messages. If it occurs in the very early iterations, consider increasing Inner Stop Tolerance and possibly Init Value P or Init Value Pmat which should ease the first iterations. An occurrence in later iterations indicates numerical difficulties typically due to scaling and/or ill-conditioning or the problem is close to infeasible. Reducing the tolerance on the stopping criteria or increasing P Update Speed might be of limited help.
ifail=24
Unable to make progress, the algorithm was stopped.
This error is returned if the solver cannot decrease the duality gap over a range of iterations. This can be due to the scaling of the problem or the problem might be close to primal or dual infeasibility.
ifail=50
The algorithm converged to a suboptimal solution.
The full accuracy was not achieved. The solution should still be usable.
This error may be reported only if Stop Criteria=SOFT (default). The solver predicted that it is unable to reach a better estimate of the solution. However, the error measures indicate that the point is a reasonable approximation. Typically, only the norm of the gradient of the Lagrangian (optimality) does not fully satisfy the requested tolerance whereas the others are well below the tolerance.
Setting Stop Criteria=STRICT will disallow this error but it is unlikely that the algorithm would reach a better solution.
ifail=51
The problem was found to be infeasible during preprocessing.
One or more of the constraints (or a part of a constraint after preprocessing) is violated by more than εfeas (Stop Tolerance Feasibility).
ifail=52
The problem was found unbounded during preprocessing.
The objective function is unbounded along an unrestricted ray and thus the problem does not have a solution.
ifail=53
The problem seems to be infeasible, the algorithm was stopped.
Whilst the algorithm cannot definitively detect that the problem is infeasible, several indirect indicators suggest that it might be the case.
ifail=54
The problem seems to be unbounded, the algorithm was stopped.
Whilst the algorithm cannot definitively detect that the problem is unbounded, several indirect indicators (such as a rapid decrease in the objective function and a lack of convergence in the inner subproblem) suggest that this might be the case. Good scaling of the objective function is always highly recommended to avoid situations where unusual behaviour falsely triggers this error exit.
ifail=-99
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
ifail=-399
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
ifail=-999
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.

7 Accuracy

The accuracy of the solution is driven by optional parameters Stop Tolerance 1, Stop Tolerance 2, Stop Tolerance Feasibility and Stop Criteria and in certain cases DIMACS Measures.
If ifail=0 on the final exit, the returned point satisfies Karush–Kuhn–Tucker (KKT) conditions to the requested accuracy (under the default settings close to ε) and thus it is a good estimate of a local solution. If ifail=50, some of the convergence conditions were not fully satisfied but the point still seems to be a reasonable estimate and should be usable. Please refer to Section 11.2 and the description of the particular options.

8 Parallelism and Performance

e04svf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e04svf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

9 Further Comments

9.1 Description of the Printed Output

The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent unit numbers which are set by optional parameters Print File and Monitoring File. Optional parameters Print Level, Print Options and Monitoring Level determine the exposed level of detail. This allows, for example, a detailed log to be generated in a file while condensed information is displayed on the screen.
By default (Print File=6, Print Level=2), five sections are printed to the standard output: a header, a list of options, problem statistics, an iteration log and a summary.
Header
The header is a message indicating the start of the solver. It should look like:
--------------------------------
 E04SV, NLP-SDP Solver (Pennon)
--------------------------------
Optional parameters list
The list shows all options of the solver, each displayed on one line. The line contains the option name, its current value and an indicator for how it was set. The options left at their defaults are noted by ‘d’, the ones you set are noted by ‘U’ and the options reset by the solver by ‘S’. The solver will automatically reset options which are set to AUTO, or options whose requested value cannot be satisfied in the given context (e.g., requesting DIMACS Measures for a nonlinear problem). Note that the output format is compatible with the file format expected by e04zpf. The output might look as follows:
Outer Iteration Limit         =                  20 * U
Stop Tolerance 1              =         1.00000E-06 * d
Stop Tolerance 2              =         1.00000E-07 * d
Hessian Density               =               Dense * S
Problem statistics
The problem size statistics show how the problem is represented internally, i.e., they reflect any changes imposed by preprocessing (for example, removed fixed and disabled variables or constant feasible constraints) and problem transformations (see, for example, Presolve Block Detect). The output may look like:
Problem Statistics
  No of variables                  7 (+0 disabled, +1 fixed)
    free (unconstrained)           0
    bounded                        7
  No of lin. constraints           8 (+0 disabled, +1 removed)
    nonzeroes                     41
  No of matrix inequal.            4
    detected matrix inq.           3 (+1 constant)
      linear                       3
      nonlinear                    0
      max. dimension               5
    detected normal inq.           1
      linear                       1
      nonlinear                    0
  Objective function          Linear
Iteration log
If Print Level=2, the status of each major iteration is condensed to one line. The line shows the major iteration number (0 represents the starting point), the current objective value, KKT measures (optimality, feasibility and complementarity), minimal penalty and the number of inner iterations performed. Note that all these values are also available in rinfo(1),…,rinfo(5) and stats(1). The output might look as follows:
-----------------------------------------------------------------
 it |  objective |  optim  |   feas  |  compl  | pen min  | inner
-----------------------------------------------------------------
  0   0.00000E+00  7.34E+00  1.23E-01  4.41E+01  1.00E+00   0
  1  -3.01998E-01  2.54E-03  0.00E+00  1.89E+00  1.00E+00   6
  2  -2.53008E+00  1.06E-03  1.30E-01  3.22E-01  3.17E-01   8
  3  -2.08172E+00  6.52E-03  1.85E-02  4.54E-02  1.01E-01   7
  4  -2.01060E+00  6.47E-03  4.10E-03  1.02E-02  3.19E-02   3
Occasionally, a one letter flag is printed at the end of the line indicating that the inner subproblem was not solved to the required accuracy. The possibilities are M for maximum number of inner iterations, L for difficulties in the line search and ! when a heuristic stop took place. Repeated troubles in the subproblems may lead to ifail=23. The output below had Inner Iteration Limit=5 which was not enough in the first subproblem (first outer iteration).
 ----------------------------------------------------------------
  it |  objective |  optim  |   feas  |  compl  | pen min | inner
 ----------------------------------------------------------------
   0  0.00000E+00  1.46E+03  5.01E+01  1.46E+03  6.40E+01   0
   1  3.78981E+02  3.86E+01  0.00E+00  1.21E+04  6.40E+01   5 M
   2  9.11724E+02  1.46E-02  0.00E+00  9.24E+02  4.45E+01   5
All KKT measures should normally converge to zero as the algorithm progresses and, once the requested accuracy (Stop Tolerance 2) is achieved, the solver stops. However, the convergence is not necessarily monotonic. The penalty parameters are decreased at each major iteration, which should improve the overall feasibility of the problem. This also increases the ill-conditioning, which might lead to a higher number of inner iterations. A very high number of inner iterations usually signals numerical difficulties. See Section 11 for the algorithmic details.
If Print Level>2, each major iteration produces significantly more detailed output comprising detailed error measures and output from every inner iteration. The output is self-explanatory so is not featured here in detail.
Summary
Once the solver finishes, a detailed summary is produced. An example is shown below:
--------------------------------------------------------------
Status: converged, an optimal solution found
--------------------------------------------------------------
Final objective value                2.300000E+01
Relative precision                   5.873755E-09
Optimality                           1.756062E-10
Feasibility                          9.048738E-08
Complementarity                      1.855566E-08
DIMACS error 1                       8.780308E-11
DIMACS error 2                       0.000000E+00
DIMACS error 3                       0.000000E+00
DIMACS error 4                       4.524369E-08
DIMACS error 5                       4.065998E-10
DIMACS error 6                       3.948012E-10
Iteration counts
  Outer iterations                             13
  Inner iterations                             82
  Linesearch steps                             95
Evaluation counts
  Augm. Lagr. values                           96
  Augm. Lagr. gradient                         96
  Augm. Lagr. hessian                          82
Timing
  Total time                    0 h  0 min  3 sec
    Evaluations + monitoring             0.04 sec
    Solver itself                        3.09 sec
  Inner minimization step                2.72 sec   ( 87.1%)
    Augm. Lagr. value                    0.28 sec   (  9.0%)
    Augm. Lagr. gradient                 0.67 sec   ( 21.6%)
    Augm. Lagr. hessian                  1.11 sec   ( 35.4%)
    system matr. factor.                 0.64 sec   ( 20.5%)
    const. matr. factor.                 0.40 sec   ( 12.8%)
  Multiplier update                      0.01 sec   (  0.3%)
  Penalty update                         0.02 sec   (  0.5%)
  Feasibility check                      0.15 sec   (  4.7%)
--------------------------------------------------------------
It starts with the status line of the overall result which matches the ifail value. It is followed by the final objective value and the error measures (including DIMACS Measures if turned on). Iteration counters, numbers of evaluations of the Augmented Lagrangian function and timing of the routine conclude the section. The timing of the algorithm is displayed only if Stats Time is set.

9.2 Internal Changes

Internal changes have been made to this routine as follows:
For details of all known issues which have been reported for the NAG Library please refer to the Known Issues.

10 Example

Semidefinite Programming has many applications in several fields of mathematics, such as combinatorial optimization, finance, statistics, control theory and structural optimization. However, these applications seldom come in the form of (2) or (3). Usually a reformulation is needed, or even a relaxation is employed, to achieve the desired formulation. This is also the case for the Lovász ϑ function computed in this example. See also e04raf for links to further examples in the NAG optimization modelling suite.
The Lovász ϑ function (also called the ϑ number) of an undirected graph G=(V,E) is an important quantity in combinatorial optimization. It gives an upper bound on the Shannon capacity of the graph G and is also related to the clique number and the chromatic number of the complement of G, both of which are NP-hard problems.
The ϑ function can be expressed in various ways, here we use the following:
$$\vartheta(G) = \min\left\{ \lambda_{\max}(H) \;:\; H \in \mathbb{S}^n,\; h_{ij} = 1 \text{ if } i=j \text{ or } ij \notin E \right\}$$
where n=|V| and 𝕊n denotes the space of real symmetric n×n matrices. This eigenvalue optimization problem is easy to reformulate as an SDP problem by introducing an artificial variable t as follows:
$$\begin{array}{ll}
\underset{t,\,H}{\text{minimize}} & t\\
\text{subject to} & H \preceq tI,\\
& H \in \mathbb{S}^n,\; h_{ij} = 1 \text{ if } i=j \text{ or } ij \notin E.
\end{array}$$
Finally, this can be written in the form (2), which is the formulation used in the example:
$$\begin{array}{ll}
\underset{t,\,x}{\text{minimize}} & t\\
\text{subject to} & tI + \displaystyle\sum_{ij \in E} x_{ij} E_{ij} - J \succeq 0,
\end{array}$$
where J is the matrix of all ones and E_ij is the matrix of all zeros except for ones in positions (i,j) and (j,i).
The example also demonstrates how to set the optional parameters and how to retrieve them.
The data file stores the Petersen graph whose ϑ is 4.

10.1 Program Text

Program Text (e04svfe.f90)

10.2 Program Data

Program Data (e04svfe.d)

10.3 Program Results

Program Results (e04svfe.r)

11 Algorithmic Details

This section contains a description of the algorithm used in e04svf which is based on the implementation of the code called Pennon. For further details, see Kočvara and Stingl (2003), Stingl (2006) and Kočvara and Stingl (2007).
For simplicity, we will use the following problem formulation; its connection to (SDP) and (BMI-SDP) is easy to see:
$$\begin{array}{lll}
\underset{x\in\mathbb{R}^n}{\text{minimize}} & f(x) & \\
\text{subject to} & g_k(x) \ge 0, & k=1,2,\ldots,m_g,\\
& h_k(x) = 0, & k=1,2,\ldots,m_h,\\
& A_k(x) \succeq 0, & k=1,2,\ldots,m_A,
\end{array} \qquad (4)$$
where f, g_k, h_k are C² functions from ℝ^n to ℝ and A_k is a C² matrix function from ℝ^n to 𝕊^mk. Here 𝕊^m denotes the space of real symmetric m×m matrices and, for S ∈ 𝕊^m, S ⪰ 0 stands for a constraint on the eigenvalues of S, namely the matrix S should be positive semidefinite. Furthermore, we define the inner product on 𝕊^m by ⟨A,B⟩_𝕊^m = trace(AB). The index 𝕊^m will be omitted whenever the dimension is clear from the context. Finally, for Φ : 𝕊^m → 𝕊^m and X, Y ∈ 𝕊^m, DΦ(X;Y) denotes the directional derivative of Φ with respect to X in the direction Y.

11.1 Overview

The algorithm is based on a (generalized) augmented Lagrangian approach and on a suitable choice of smooth penalty/barrier functions φ_g : ℝ → ℝ for (standard) inequality constraints and φ_A : ℝ → ℝ for constraints on matrix eigenvalues. By means of φ_A we define a penalty/barrier function for matrix inequalities as follows.
Let A ∈ 𝕊^m have an eigenvalue decomposition A = SᵀΛS where Λ = diag(λ1, λ2, …, λm). We define the matrix function Φ_P : 𝕊^m → 𝕊^m for P > 0 as
$$\Phi_P : A \mapsto S^T \begin{pmatrix}
P\varphi_A(\lambda_1/P) & 0 & \cdots & 0\\
0 & P\varphi_A(\lambda_2/P) & \cdots & 0\\
\vdots & & \ddots & \vdots\\
0 & 0 & \cdots & P\varphi_A(\lambda_m/P)
\end{pmatrix} S. \qquad (5)$$
Both φg and φA satisfy a number of assumptions (see Kočvara and Stingl (2003)) guaranteeing, in particular, that for any p, P>0
$$\begin{array}{ll}
g_k(x) \ge 0 \iff p\,\varphi_g(g_k(x)/p) \ge 0, & k=1,2,\ldots,m_g,\\
A_k(x) \succeq 0 \iff \Phi_P(A_k(x)) \succeq 0, & k=1,2,\ldots,m_A.
\end{array} \qquad (6)$$
Further in the text, we use simplified notation φp(·)=pφg(·/p).
Thus for any p, P>0, problem (4) has the same solution as the following augmented problem
$$\begin{array}{lll}
\underset{x\in\mathbb{R}^n}{\text{minimize}} & f(x) & \\
\text{subject to} & \varphi_p(g_k(x)) \ge 0, & k=1,2,\ldots,m_g,\\
& h_k(x) = 0, & k=1,2,\ldots,m_h,\\
& \Phi_P(A_k(x)) \succeq 0, & k=1,2,\ldots,m_A.
\end{array} \qquad (7)$$
The Lagrangian of (7) can be viewed as a (generalized) augmented Lagrangian of (4):
$$F(x,u,v,U,p,P) = f(x) - \sum_{k=1}^{m_g} u_k\,\varphi_p(g_k(x)) + \sum_{k=1}^{m_h} v_k\,h_k(x) - \sum_{k=1}^{m_A} \left\langle U_k,\, \Phi_P(A_k(x)) \right\rangle ; \qquad (8)$$
where u ∈ ℝ^mg, v ∈ ℝ^mh and U = (U1, …, U_mA), U_k ∈ 𝕊^mk, k=1,…,mA, are Lagrange multipliers associated with the (standard) inequality and equality constraints and the matrix inequality constraints, respectively.
The algorithm combines ideas of the (exterior) penalty and (interior) barrier methods with the augmented Lagrangian method. Each outer iteration of Algorithm 1 consists of three steps: (i) an approximate solution of the inner problem (9), (ii) an update of the Lagrangian multipliers and (iii) an update of the penalty parameters, as described below.
Step (i) of Algorithm 1, further referred as the inner problem, is the most time-consuming and thus the choice of the solver for (9) is critical for the overall efficiency of the method. See Section 11.4 below.
The inequality Lagrangian multipliers update in step (ii) is motivated by the fact that if x^(ℓ+1), v^(ℓ+1) solve (9) exactly in iteration ℓ, we obtain
$$\nabla_x F\left(x^{\ell+1}, u^{\ell+1}, v^{\ell+1}, U^{\ell+1}, p, P\right) = 0.$$
Details can be found, for example, in Stingl (2006).
In practice, numerical studies showed that it is not advantageous to do the full updates of multipliers u, U. Firstly, big changes in the multipliers may lead to a large number of iterations in subsequent solution of (9) and, secondly, the multipliers might become ill-conditioned after a few steps and the algorithm suffers from numerical instabilities. To overcome these difficulties, a restricted update is performed instead.
New Lagrangian multipliers u_k^(ℓ+1) for the (standard) inequalities, k=1,2,…,mg, are restricted so that they do not violate the following bound
$$\mu_g < \frac{u_k^{\ell+1}}{u_k^{\ell}} < \frac{1}{\mu_g}$$
for a given 0<μg <1 (see U Update Restriction).
A similar strategy is applied to the matrix multipliers U_k^(ℓ+1) as well. For 0 < μ_A < 1 (see Umat Update Restriction) set
$$U_k^{\text{new}} = U_k^{\ell+1} + \mu_A \left( U_k^{\ell} - U_k^{\ell+1} \right).$$
The penalty parameters p,P in step (iii) are updated by some constant factor dependent on the initial penalty parameters p0,P0 and P Update Speed. The update process is stopped when pmin and Pmin are reached (see P Min, Pmat Min).
Additional details about the multiplier and penalty update strategies, as well as local and global convergence properties under standard assumptions can be found in an extensive study Stingl (2006).

11.2 Stopping Criteria

Algorithm 1 is stopped when all of the stopping criteria are satisfied to the requested accuracy; these are:
$$\frac{\left| f(x) - F(x,u,v,U,p,P) \right|}{1 + |f(x)|} \le \varepsilon_1, \qquad \text{(relative duality gap)} \qquad (10)$$
$$\frac{\left| f(x^{\ell}) - f(x^{\ell-1}) \right|}{1 + |f(x^{\ell})|} \le \varepsilon_1, \qquad \text{(relative precision)} \qquad (11)$$
and the following, based on Karush–Kuhn–Tucker (KKT) error measures (to keep the notation simple, formulation (4) is assumed and the iteration index is dropped):
$$\left\| \nabla f(x) - \sum_{k=1}^{m_g} u_k \nabla g_k(x) + \sum_{k=1}^{m_h} v_k \nabla h_k(x) - \sum_{k=1}^{m_A} \left[ \left\langle U_k, \frac{\partial A_k(x)}{\partial x_i} \right\rangle \right]_{i=1,\ldots,n} \right\| \le \varepsilon_2, \qquad \text{(optimality)} \qquad (12)$$
$$g_k(x) \ge -\varepsilon_{\text{feas}}, \quad |h_k(x)| \le \varepsilon_{\text{feas}}, \quad A_k(x) \succeq -\varepsilon_{\text{feas}}\, I \quad \text{for all } k, \qquad \text{(feasibility)} \qquad (13)$$
$$|g_k(x)\,u_k| \le \varepsilon_2, \quad |h_k(x)\,v_k| \le \varepsilon_2, \quad \left| \left\langle A_k(x), U_k \right\rangle \right| \le \varepsilon_2. \qquad \text{(complementarity)} \qquad (14)$$
Here ε1, ε2, εfeas may be set in the option settings as Stop Tolerance 1, Stop Tolerance 2 and Stop Tolerance Feasibility, respectively.
Note that if Task=FEASIBLE POINT, only the feasibility is taken into account.
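The tolerances above can be adjusted before the solve, for example as follows (a schematic fragment; the option strings follow the names in Section 12):

      Call e04zmf(handle,'Stop Tolerance 1 = 1.0e-7',ifail)
      Call e04zmf(handle,'Stop Tolerance 2 = 1.0e-8',ifail)
      Call e04zmf(handle,'Stop Tolerance Feasibility = 1.0e-8',ifail)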
There is an option for linear SDP problems to switch from stopping criteria based on the KKT conditions to DIMACS Measures, see Mittelmann (2003). This is the default choice. To keep the notation readable, these are defined here only for the following simpler formulation of linear SDP rather than (2):
$$\begin{array}{ll}
\underset{x\in\mathbb{R}^n}{\text{minimize}} & c^T x\\
\text{subject to} & \mathcal{A}(x) = \displaystyle\sum_{i=1}^{n} x_i A_i - A_0 \succeq 0.
\end{array} \qquad (15)$$
In this case the algorithm stops when:
$$\begin{array}{l}
\mathrm{Derr}_1 = \dfrac{\left\| \mathcal{A}^*(U) - c \right\|}{1 + \left\| c \right\|}\\[2ex]
\mathrm{Derr}_2 = \max\!\left( 0,\; \dfrac{-\lambda_{\min}(U)}{1 + \left\| c \right\|} \right)\\[2ex]
\mathrm{Derr}_4 = \max\!\left( 0,\; \dfrac{-\lambda_{\min}\!\left( \sum_{i=1}^{n} x_i A_i - A_0 \right)}{1 + \left\| A_0 \right\|} \right)\\[2ex]
\mathrm{Derr}_5 = \dfrac{\left\langle A_0, U \right\rangle - c^T x}{1 + \left| \left\langle A_0, U \right\rangle \right| + \left| c^T x \right|}\\[2ex]
\mathrm{Derr}_6 = \dfrac{\left\langle \sum_{i=1}^{n} x_i A_i - A_0,\; U \right\rangle}{1 + \left| \left\langle A_0, U \right\rangle \right| + \left| c^T x \right|}
\end{array} \qquad (16)$$
where $\mathcal{A}^*(\cdot)$ denotes the adjoint operator to $\mathcal{A}(\cdot)$, $[\mathcal{A}^*(U)]_i = \langle A_i, U\rangle$.
They can be viewed as a scaled version of the KKT conditions. Derr_1 represents the (scaled) norm of the gradient of the Lagrangian, Derr_2 and Derr_4 the dual and primal infeasibility, respectively, and Derr_5 and Derr_6 measure the duality gap and the complementary slackness. Note that, in this solver, Derr_2 = 0 by definition and Derr_3 is automatically zero because it relates to slack variables, which are not used in this formulation.

11.3 Choice of penalty functions φg and φA

To treat the (standard) inequality constraints g_k(x) ≥ 0, we use the penalty/barrier function proposed by Ben–Tal and Zibulevsky (1997):
$$\varphi_g(\tau) = \begin{cases}
-\tau + \frac{1}{2}\tau^2 & \text{if } \tau \le \bar{\tau},\\[1ex]
-(1-\bar{\tau})^2 \log\!\left( \dfrac{1 - 2\bar{\tau} + \tau}{1 - \bar{\tau}} \right) - \bar{\tau} + \frac{1}{2}\bar{\tau}^2 & \text{if } \tau > \bar{\tau},
\end{cases}$$
with the default $\bar{\tau} = \frac{1}{2}$.
The choice of φA (and thus of ΦP) is motivated by the complexity of the evaluation of ΦP and its derivatives. If φA is defined as
$$\varphi_A(\tau) = \frac{1}{1+\tau} - 1,$$
it is possible to avoid the explicit eigenvalue decomposition in (5), as can be seen in the formulae below (note that the index k is omitted):
$$\begin{aligned}
\Phi_P(A(x)) &= P^2\, Z(x) - PI\\
\frac{\partial}{\partial x_i}\,\Phi_P(A(x)) &= -P^2\, Z(x)\, \frac{\partial A(x)}{\partial x_i}\, Z(x)\\
\frac{\partial^2}{\partial x_i\, \partial x_j}\,\Phi_P(A(x)) &= P^2\, Z(x) \left( \frac{\partial A(x)}{\partial x_i}\, Z(x)\, \frac{\partial A(x)}{\partial x_j} - \frac{\partial^2 A(x)}{\partial x_i\, \partial x_j} + \frac{\partial A(x)}{\partial x_j}\, Z(x)\, \frac{\partial A(x)}{\partial x_i} \right) Z(x)
\end{aligned} \qquad (17)$$
where
$$Z(x) = \left( A(x) + PI \right)^{-1}. \qquad (18)$$
For details, see Kočvara and Stingl (2003). Note that, in particular, formula (17) requires nontrivial computational resources even if careful handling of the sparsity of the partial derivatives of A(x) is implemented. e04svf uses a set of strategies described in Fujisawa et al. (1997) adapted for parallel computation.
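Purely as an illustration of (17) and (18), the following dense sketch evaluates Φ_P(A) for a given symmetric A using LAPACK Cholesky routines; it assumes A + P·I is positive definite and ignores the sparsity-exploiting strategies just mentioned.

      Subroutine phi_p_dense(n, a, p, phi)
        ! Phi_P(A) = P^2 * Z - P*I  with  Z = (A + P*I)^(-1), cf. (17)-(18).
        Use nag_library, Only: nag_wp
        Implicit None
        Integer, Intent (In)             :: n
        Real (Kind=nag_wp), Intent (In)  :: a(n,n), p
        Real (Kind=nag_wp), Intent (Out) :: phi(n,n)
        Integer                          :: i, j, info
        External                         :: dpotrf, dpotri
        phi = a
        Do i = 1, n
          phi(i,i) = phi(i,i) + p              ! form A + P*I
        End Do
        Call dpotrf('L',n,phi,n,info)          ! Cholesky factorization (info/=0: not positive definite)
        Call dpotri('L',n,phi,n,info)          ! lower triangle of Z = (A + P*I)^(-1)
        Do j = 1, n                            ! scale and symmetrize: P^2*Z - P*I
          Do i = j, n
            phi(i,j) = p*p*phi(i,j)
            phi(j,i) = phi(i,j)
          End Do
          phi(j,j) = phi(j,j) - p
        End Do
      End Subroutine phi_p_dense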

11.4 Solution of the inner problem

This section describes the solution of the inner problem (step (i) of Algorithm 1). We attempt to find an approximate solution of the following system (in x and v), up to the given precision α:
$$\begin{aligned}
\nabla_x F(x,u,v,U,p,P) &= 0\\
h(x) &= 0
\end{aligned} \qquad (19)$$
where the penalty parameters p,P, as well as the Lagrangian multipliers u and U are fixed.
A linesearch SQP framework (Algorithm 2) is used due to its desirable convergence properties: at each inner iteration the Newton system (20) for the step is formed and solved, followed by a line search.
System (20) is solved by the factorization routine MA97 (see Hogg and Scott (2011)), in combination with an inertia correction strategy described in Stingl (2006). The step length selection is guided by Linesearch Mode.
If there are no equality constraints in the problem, the unconstrained minimization in Step (i) of Algorithm 1 simplifies to the modified Newton method with line-search (for details, see Kočvara and Stingl (2003)). Alternatively, the equality constraints hk (x) = 0 can be converted to two inequalities which would be treated with the remaining constraints (see Transform Constraints).

12 Optional Parameters

Several optional parameters in e04svf define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e04svf these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling e04zmf anytime between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
If any options are set by the solver (typically those with the choice of AUTO), their value can be retrieved by e04znf. If the solver is called again, any such options are reset to their default values and the decision is made again.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.

12.1 Description of the Optional Parameters

For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
All options accept the value DEFAULT to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
DIMACS Measures    a    Default = CHECK
If the problem is a linear semidefinite programming problem, this parameter specifies if DIMACS error measures (see Section 11.2) should be computed and/or checked. In other cases, this option reverts to NO automatically.
Constraint: DIMACS Measures=COMPUTE, CHECK or NO.
Hessian Density    a    Default = AUTO
This optional parameter guides the solver on how the Hessian matrix of augmented Lagrangian F(x,u,v,U,p,P) should be built. Option AUTO leaves the decision to the solver and it is the recommended option. Setting it to DENSE bypasses the autodetection and the Hessian is always built as a dense matrix. Option SPARSE instructs the solver to use a sparse storage and factorization of the matrix if possible.
Constraint: Hessian Density=AUTO, DENSE or SPARSE
Infinite Bound Size    r    Default = 10^20
This defines the ‘infinite’ bound bigbnd in the definition of the problem constraints. Any upper bound greater than or equal to bigbnd will be regarded as +∞ (and similarly any lower bound less than or equal to -bigbnd will be regarded as -∞). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
Constraint: Infinite Bound Size ≥ 1000.
Initial P    a    Default = AUTOMATIC
This optional parameter defines the choice of the penalty optional parameters p0, P0, see Algorithm 1.
Initial P=AUTOMATIC
The penalty optional parameters are chosen automatically as set by the optional parameters Init Value P and Init Value Pmat and are subject to automatic scaling. Note that P0 might be increased so that the penalty function Φ_P(·) is defined for all matrix constraints at the starting point.
Initial P=KEEP PREVIOUS
The penalty optional parameters are kept from the previous run of the solver if possible. If not, this option reverts to AUTOMATIC. Note that even if the matrix penalty optional parameters are the same as in the previous run, they are still subject to a possible increase so that the penalty function Φ_P(·) is well defined at the starting point.
Constraint: Initial P=AUTOMATIC or KEEP PREVIOUS.
Initial U    a    Default = AUTOMATIC
This parameter guides the solver on which initial Lagrangian multipliers are to be used.
Initial U=AUTOMATIC
The Lagrangian multipliers are chosen automatically as set by automatic scaling.
Initial U=USER
The values of arrays u and ua (if provided) are used as the initial Lagrangian multipliers subject to automatic adjustments. If one or the other array is not provided, the choice for missing data is as in AUTOMATIC.
Initial U=KEEP PREVIOUS
The Lagrangian multipliers are kept from the previous run of the solver. If this option is set for the first run or optional parameters change the approach of the solver, the choice automatically reverts to AUTOMATIC. This might be useful if the solver is hot started, for example, to achieve higher precision of the solution.
Constraint: Initial U=AUTOMATIC, USER or KEEP PREVIOUS.
Initial X    a    Default = USER
This parameter guides which starting point x0 is to be used.
Initial X=AUTOMATIC
The starting point is chosen automatically so that it satisfies simple bounds on the variables or as a zero vector. Input of argument x is ignored.
Initial X=USER
Initial values of argument x are used as a starting point.
Constraint: Initial X=AUTOMATIC or USER.
Init Value P    r    Default = 1.0
This parameter defines the value p0, the initial penalty optional parameter for (standard) inequalities. A low value of the penalty causes the solution of the inner problem to be closer to the feasible region and thus to the desirable result. However, it also increases ill-conditioning of the system. It is not advisable to set the penalty too low unless a good starting point is provided.
Constraint: ε4Init Value P104.
Init Value Pmat    r    Default = 1.0
The value of this option suggests P0, the initial penalty optional parameter for matrix inequalities. It is similar to Init Value P (and the same advice applies), however, P0 gets increased automatically if the matrix constraints are more infeasible than the actual penalty optional parameter.
Constraint: ε4Init Value Pmat104.
Inner Iteration Limit    i    Default = 100
The maximum number of the inner iterations (Newton steps) to be performed by Algorithm 2 in each outer iteration. Setting the option too low might lead to ifail=23. Values higher than 100 are unlikely to improve convergence.
Constraint: Inner Iteration Limit>0.
Inner Stop Criteria    a    Default = HEURISTIC
The precision α for the solution of the inner subproblem is determined in Algorithm 1 and under typical circumstances Algorithm 2 is expected to reach this precision within the given Inner Iteration Limit. If any problems are detected and Inner Stop Criteria=HEURISTIC, Algorithm 2 is allowed to stop before reaching the requested precision or the Inner Iteration Limit. This usually saves many unfruitful iterations and the solver may recover in the following iterations. If you suspect that the heuristic problem detection is not suitable for your problem, setting Inner Stop Criteria=STRICT disallows such behaviour.
Constraint: Inner Stop Criteria=HEURISTIC or STRICT.
Inner Stop Tolerance    r    Default = 10^-2
This option sets the required precision α0 for the first inner problem solved by Algorithm 2. The precision of the solution of the inner problem does not need to be very high in the first outer iterations and it is automatically adjusted through the outer iterations to reach the optimality limit ε2 in the last one.
Setting α0 too restrictive (too low) causes an increase of the number of inner iterations needed in the first outer iterations and might lead to ifail=23. In certain cases it might be helpful to use a more relaxed (higher) α0 and increase P Update Speed which should reduce the number of inner iterations needed at the beginning of the computation in exchange for a possibly higher number of the outer iterations.
Constraint: ε < Inner Stop Tolerance ≤ 10^3.
Linesearch Mode    a    Default = AUTO
This controls the step size selection in Algorithm 2. If Linesearch Mode=FULLSTEP (the default for linear problems), unit steps are taken where possible and step shortening takes place only to avoid undefined regions for the matrix penalty function Φ_P(·) (see (17)). This may be used for linear problems but it is not recommended for any nonlinear ones. If Linesearch Mode=ARMIJO, an Armijo backtracking linesearch is used instead, which is a fairly basic linesearch. If Linesearch Mode=GOLDSTEIN, a cubic safeguarded linesearch based on the Goldstein condition is employed; this is the recommended (and default) choice for nonlinear problems.
Constraint: Linesearch Mode=AUTO, FULLSTEP, ARMIJO or GOLDSTEIN.
List    a    Default = NO
This parameter may be set to YES if you wish to turn on printing of each optional parameter specification as it is supplied.
Constraint: List=YES or NO
Monitor Frequency    i    Default = 0
If Monitor Frequency>0, the solver returns to you at the end of every i-th outer iteration. During these intermediate exits, the current point x and the Lagrangian multipliers u, ua (if requested) are provided as well as the statistics and error measures (rinfo, stats). Argument inform helps to distinguish between intermediate and final exits and also allows immediate termination.
If Monitor Frequency=0, the solver stops only once on the final point and no intermediate exits are made.
Constraint: Monitor Frequency ≥ 0.
Monitoring File    i    Default = -1
If i ≥ 0, the unit number for the secondary (monitoring) output. If set to -1, no secondary output is provided. The following information is output to the unit:
Constraint: Monitoring File ≥ -1.
Monitoring Level    i    Default = 4
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with Print Level.
Constraint: 0 ≤ Monitoring Level ≤ 5.
Outer Iteration Limit    i    Default = 100
The maximum number of the outer iterations to be performed by Algorithm 1. If Outer Iteration Limit=0, no iteration is performed, only quantities needed in the stopping criteria are computed and returned in rinfo. This might be useful in connection with Initial X=USER and Initial U=USER to check optimality of the given point. However, note that the rules for possible modifications of the starting point still apply, see u and ua. Setting the option too low might lead to ifail=22.
Constraint: Outer Iteration Limit ≥ 0.
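This use of the option can be sketched as follows (a schematic fragment continuing the example in Section 3, with x, u and ua already holding the point and multiplier estimates to be checked):

      Call e04zmf(handle,'Outer Iteration Limit = 0',ifail)
      Call e04zmf(handle,'Initial X = User',ifail)
      Call e04zmf(handle,'Initial U = User',ifail)
      ifail = -1
      Call e04svf(handle,nvar,x,nnzu,u,nnzuc,uc,nnzua,ua,rinfo,stats,inform,ifail)
      Write (*,*) 'optimality, feasibility, complementarity:', rinfo(2:4)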
P Min    r    Default = ε
This controls pmin, the lowest possible penalty value p used for (standard) inequalities. In general, very small values of the penalty optional parameters cause ill-conditioning which might lead to numerical difficulties. On the other hand, very high pmin prevents the algorithm from reaching the requested accuracy on the feasibility. Under normal circumstances, the default value is recommended.
Constraint: ε ≤ P Min ≤ 10^-2.
Pmat Min    r    Default = ε
This is an equivalent of P Min for the minimal matrix penalty optional parameter Pmin. The same advice applies.
Constraint: ε ≤ Pmat Min ≤ 10^-2.
Preference    a    Default = SPEED
This option affects how contributions from the matrix constraints (17) to the system Hessian matrix are computed. The default option of Preference=SPEED should be suitable in most cases. However, dealing with matrix constraints of a very high dimension may cause noticeable memory overhead and switching to Preference=MEMORY may be required.
Constraint: Preference=SPEED or MEMORY.
Presolve Block Detect    a    Default = YES
If Presolve Block Detect=YES, the matrix constraints are checked during preprocessing to determine if they can be split into smaller independent ones, thus speeding up the solver.
Constraint: Presolve Block Detect=YES or NO.
Print File    i    Default = advisory message unit number
If i ≥ 0, the unit number for the primary output of the solver. If Print File=-1, the primary output is completely turned off independently of other settings. The default value is the advisory message unit number as defined by x04abf at the time of the optional parameters initialization, e.g., at the initialization of the handle. The following information is output to the unit:
Constraint: Print File ≥ -1.
Print Level    i    Default = 2
This parameter defines how detailed information should be printed by the solver to the primary output.
i Output
0 No output from the solver
1 Only the final status and the objective value
2 Problem statistics, one line per outer iteration showing the progress of the solution, final status and statistics
3 As level 2 but detailed output of the outer iterations is provided and brief overview of the inner iterations
4, 5 As level 3 but details of the inner iterations are printed as well
Constraint: 0 ≤ Print Level ≤ 5.
Print Options    a    Default = YES
If Print Options=YES, a listing of optional parameters will be printed to the primary output.
Constraint: Print Options=YES or NO.
P Update Speed    i    Default = 12
This option affects the rate at which the penalty optional parameters p, P are updated (Algorithm 1, step (iii)) and thus indirectly influences the overall number of outer iterations. Its value can be interpreted as the typical number of outer iterations needed to get from the initial penalty values p0, P0 half-way to pmin and Pmin. Values smaller than 3 cause a very aggressive penalty update strategy which might lead to an increased number of inner iterations and possibly to numerical difficulties. On the other hand, values higher than 15 produce a relatively conservative approach which leads to a higher number of outer iterations.
If the solver encounters difficulties on your problem, a higher value might help. If your problem is working fine, setting a lower value might increase the speed.
Constraint: 1 ≤ P Update Speed ≤ 100.
Stats Time    a    Default = NO
This parameter turns on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful for a choice of different solving approaches. It is possible to choose between CPU and wall clock time. Choice YES is equivalent to WALL CLOCK.
Constraint: Stats Time=YES, NO, CPU or WALL CLOCK.
Stop Criteria    a    Default = SOFT
If Stop Criteria=SOFT, the solver is allowed to stop prematurely with a suboptimal solution, ifail=50, if it predicts that a better estimate of the solution cannot be reached. This is the recommended option.
Constraint: Stop Criteria=SOFT or STRICT.
Stop Tolerance 1    r    Default = max(10^-6, ε)
This option defines ε1 used as a tolerance for the relative duality gap (10) and the relative precision (11), see Section 11.2.
Constraint: Stop Tolerance 1>ε.
Stop Tolerance 2    r    Default = max(10^-7, ε)
This option sets the value ε2 which is used for optimality (12) and complementarity (14) tests from KKT conditions or if DIMACS Measures=Check for all DIMACS error measures instead. See Section 11.2.
Constraint: Stop Tolerance 2>ε.
Stop Tolerance Feasibility    r    Default = max(10^-7, ε)
This parameter places an acceptance limit on the feasibility of the solution (13), εfeas. See Section 11.2.
Constraint: Stop Tolerance Feasibility>ε.
Task    a    Default = MINIMIZE
This parameter specifies the required direction of the optimization. If Task=FEASIBLE POINT, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function was set, Task reverts to FEASIBLE POINT automatically.
Constraint: Task=MINIMIZE, MAXIMIZE or FEASIBLE POINT.
Transform Constraints    a    Default = AUTO
This parameter controls how equality constraints are treated by the solver. If Transform Constraints=EQUALITIES, all equality constraints hk(x)=0 from (4) are treated as two inequalities hk(x) ≥ 0 and hk(x) ≤ 0, see Section 11.4. This is the default and the only option in this release for equality constrained problems.
Constraint: Transform Constraints=AUTO, NO or EQUALITIES.
U Update Restriction    r    Default = 0.5
This defines the value μg giving the bounds on the updates of Lagrangian multipliers for (standard) inequalities between the outer iterations. Values close to 1 limit the changes of the multipliers and serve as a kind of smoothing, lower values allow more significant changes.
Based on numerical experience, big variation in the multipliers may lead to a large number of iterations in the subsequent step and might disturb the convergence due to ill-conditioning.
It might be worth experimenting with the value on your particular problem. Mid-range values are recommended over more extreme ones.
Constraint: ε<U Update Restriction<1.
Umat Update Restriction    r    Default = 0.3
This is an equivalent of U Update Restriction for matrix constraints, denoted as μA in Section 11.1. The advice above applies equally.
Constraint: ε<Umat Update Restriction<1.