NAG Library Routine Document
e04svf
(handle_solve_pennon)
Note: this routine uses optional parameters to define choices in the problem specification and in the details of the algorithm. If you wish to use default
settings for all of the optional parameters, you need only read Sections 1 to 10 of this document. If, however, you wish to reset some or all of the settings, please refer to Section 11 for a detailed description of the algorithm and to Section 12 for a detailed description of the specification of the optional parameters.
1 Purpose
e04svf is a solver from the NAG optimization modelling suite for problems such as quadratic programming (QP), linear semidefinite programming (SDP) and semidefinite programming with bilinear matrix inequalities (BMI-SDP).
2 Specification
Fortran Interface
Subroutine e04svf (handle, nvar, x, nnzu, u, nnzuc, uc, nnzua, ua, rinfo, stats, inform, ifail)
Integer, Intent (In) :: nvar, nnzu, nnzuc, nnzua
Integer, Intent (Inout) :: inform, ifail
Real (Kind=nag_wp), Intent (Inout) :: x(nvar), u(nnzu), uc(nnzuc), ua(nnzua)
Real (Kind=nag_wp), Intent (Out) :: rinfo(32), stats(32)
Type (c_ptr), Intent (In) :: handle

C Header Interface
#include <nagmk26.h>
void e04svf_ (void **handle, const Integer *nvar, double x[], const Integer *nnzu, double u[], const Integer *nnzuc, double uc[], const Integer *nnzua, double ua[], double rinfo[], double stats[], Integer *inform, Integer *ifail)

3 Description
e04svf serves as a solver for compatible problems stored as a handle. The handle points to an internal data structure which defines the problem and serves as a means of communication for routines in the suite. First, the problem handle is initialized by e04raf. Then some of the routines e04ref, e04rff, e04rhf, e04rjf, e04rnf or e04rpf may be used to formulate the objective function, (standard) constraints and matrix constraints of the problem. Once the problem is fully set up, the handle may be passed to the solver. When the handle is no longer needed, e04rzf should be called to destroy it and deallocate the memory held within. See e04raf for more details.
Problems which can be defined this way are, for example, (generally nonconvex) quadratic programming (QP)
$$\begin{array}{lll}\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}& \frac{1}{2}{x}^{\mathrm{T}}Hx+{c}^{\mathrm{T}}x&\\ \text{subject to}& {l}_{B}\le Bx\le {u}_{B}\text{,}&\\ & {l}_{x}\le x\le {u}_{x}\text{,}\end{array}\tag{1}$$
linear semidefinite programming problems (SDP)
$$\begin{array}{lll}\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}& {c}^{\mathrm{T}}x&\\ \text{subject to}& \sum_{i=1}^{n}{x}_{i}{A}_{i}^{k}-{A}_{0}^{k}\succeq 0\text{,}\quad k=1,\dots,{m}_{A}\text{,}&\\ & {l}_{B}\le Bx\le {u}_{B}\text{,}&\\ & {l}_{x}\le x\le {u}_{x}\text{,}\end{array}\tag{2}$$
or semidefinite programming problems with bilinear matrix inequalities (BMI-SDP)
$$\begin{array}{lll}\underset{x\in {\mathbb{R}}^{n}}{\mathrm{minimize}}& \frac{1}{2}{x}^{\mathrm{T}}Hx+{c}^{\mathrm{T}}x&\\ \text{subject to}& \sum_{i,j=1}^{n}{x}_{i}{x}_{j}{Q}_{ij}^{k}+\sum_{i=1}^{n}{x}_{i}{A}_{i}^{k}-{A}_{0}^{k}\succeq 0\text{,}\quad k=1,\dots,{m}_{A}\text{,}&\\ & {l}_{B}\le Bx\le {u}_{B}\text{,}&\\ & {l}_{x}\le x\le {u}_{x}\text{.}\end{array}\tag{3}$$
Here $c$, ${l}_{x}$ and ${u}_{x}$ are $n$-dimensional vectors, $H$ is a symmetric $n$ by $n$ matrix, ${l}_{B}$, ${u}_{B}$ are ${m}_{B}$-dimensional vectors, $B$ is a general ${m}_{B}$ by $n$ rectangular matrix and ${A}_{i}^{k}$, ${Q}_{ij}^{k}$ are symmetric matrices. The expression $S\succeq 0$ stands for a constraint on the eigenvalues of a symmetric matrix $S$, namely, all the eigenvalues should be nonnegative, i.e., the matrix should be positive semidefinite. Note that models with fixed variables are not allowed with this solver in this release and will result in ${\mathbf{ifail}}={\mathbf{2}}$; however, this limitation will be removed in a future release. See the relevant routines in the suite for more details on the problem formulation.
The solver is based on a generalized Augmented Lagrangian method with a suitable choice of standard and matrix penalty functions. For a detailed description of the algorithm see Section 11. Under standard assumptions on the problem (Slater constraint qualification, boundedness of the objective function on the feasible set; see Stingl (2006) for details) the algorithm converges to a local solution. In the case of convex problems such as linear SDP or convex QP, this is the global solution. The solver is suitable for both small dense and large-scale sparse problems.
The algorithm behaviour and solver strategy can be modified by various optional parameters (see
Section 12) which can be set by
e04zmf and
e04zpf anytime between the initialization of the handle by
e04raf and a call to the solver. Once the solver has finished, options may be modified for the next solve. The solver may be called repeatedly with various starting points and/or optional parameters.
There are several optional parameters with a multiple choice where the default choice is $\mathrm{AUTO}$ (for example, Hessian Density). This value means that the decision over the option is left to the solver based on the structure of the problem. The option getter e04znf can be called to retrieve the choice of these options as well as the value of any other option.
Optional parameter
Task may be used to switch the problem to maximization or to ignore the objective function and find only a feasible point.
Optional parameter
Monitor Frequency may be used to turn on the monitor mode of the solver. The solver invoked in this mode pauses regularly even before the optimal point is found to allow monitoring the progress from the calling program. All the important error measures and statistics are available in the calling program which may terminate the solver early if desired (see argument
inform).
3.1 Structure of the Lagrangian Multipliers
The algorithm works internally with estimates of both the decision variables, denoted by
$x$, and the Lagrangian multipliers (dual variables) for standard and matrix constraints, denoted by
$u$ and
$U$, respectively. You may provide initial estimates, request approximations during the run (the monitor mode turned on) and obtain the final values. The Lagrangian multipliers are split into two arrays, the multipliers
$u$ for (standard) constraints are stored in array
u and multipliers
$U$ for matrix constraints in array
ua. Both arrays need to conform to the structure of the constraints.
If the simple bounds were defined (
e04rhf was successfully called), the first
$2n$ elements of
u belong to the corresponding Lagrangian multipliers, interleaving a multiplier for the lower and for the upper bound for each
${x}_{i}$. If any of the bounds were set to infinity, the corresponding Lagrangian multipliers are set to
$0$ and may be ignored.
Similarly, the following
$2{m}_{B}$ elements of
u belong to multipliers for the linear constraints, if formulated by
e04rjf. The organization is the same, i.e., the multipliers for each constraint for the lower and upper bounds are alternated and zeroes are used for any missing (infinite bound) constraint.
A Lagrangian multiplier for a matrix constraint (one block) of dimension
$d$ by
$d$ is a dense symmetric matrix of the same dimension. All multipliers
$U$ are stored next to each other in array
ua in the same order as the matrix constraints were defined by
e04rnf and
e04rpf. The lower triangle of each is stored in the packed column order (see
Section 3.3.2 in the F07 Chapter Introduction). For example, if there are four matrix constraints of dimensions
$7$,
$3$,
$1$,
$1$, the dimension of array
ua should be
$36$. The first
$28$ elements
$\left({d}_{1}\times \left({d}_{1}+1\right)/2\right)$ belong to the packed lower triangle of
${U}_{1}$, followed by six elements of
${U}_{2}$ and one element for each
${U}_{3}$ and
${U}_{4}$. See for example
Section 10 in
e04rdf.
3.2 Approximation of the Lagrangian Multipliers
By the nature of the algorithm, all inequality Lagrangian multipliers $u$, $U$ are always kept positive during the computational process. This applies even to Lagrangian multipliers of constraints which are inactive at the solution: they will only be close to zero, whereas they would normally be exactly zero. This is one of the major differences between results from solvers based on the active set method (such as e04nqf) and others, such as e04svf or interior point methods. As a consequence, the initial estimate of the multipliers (if provided) might be adjusted by the solver to be sufficiently positive; also, the estimates returned during the intermediate exits might only be a very crude approximation to their final values as they do not satisfy all the Karush–Kuhn–Tucker (KKT) conditions.
Another difference is that
e04nqf merges the multipliers for the lower and upper inequality into one element whose sign determines the inequality, because at most one bound can be active and the multiplier for the inactive bound is exactly zero. Negative multipliers are associated with the upper bounds and positive ones with the lower bounds. On the other hand,
e04svf works with both multipliers at the same time so they are returned in two elements, one for the lower bound, the other for the upper bound (see
Section 3.1). An equivalent result can be achieved by subtracting the upper bound multiplier from the lower one. This holds even when equalities are interpreted as two inequalities (see optional parameter
Transform Constraints).
4 References
Ben–Tal A and Zibulevsky M (1997) Penalty/barrier multiplier methods for convex programming problems SIAM Journal on Optimization 7 347–366
Fujisawa K, Kojima M and Nakata K (1997) Exploiting sparsity in primal-dual interior-point method for semidefinite programming Math. Programming 79 235–253
Hogg J D and Scott J A (2011) HSL_MA97: a bit-compatible multifrontal code for sparse symmetric systems RAL Technical Report RAL-TR-2011-024
Kočvara M and Stingl M (2003) PENNON – a code for convex nonlinear and semidefinite programming Optimization Methods and Software 18(3) 317–333
Kočvara M and Stingl M (2007) On the solution of large-scale SDP problems by the modified barrier method using iterative solvers Math. Programming (Series B) 109(2–3) 413–444
Mittelmann H D (2003) An independent benchmarking of SDP and SOCP solvers Math. Programming 95 407–430
Stingl M (2006) On the Solution of Nonlinear Semidefinite Programs by Augmented Lagrangian Methods, PhD thesis Institute of Applied Mathematics II, Friedrich–Alexander University of Erlangen–Nuremberg
5 Arguments
 1: $\mathbf{handle}$ – Type (c_ptr)Input

On entry: the handle to the problem. It needs to be initialized by
e04raf and the problem formulated by some of the routines
e04ref,
e04rff,
e04rhf,
e04rjf,
e04rnf and
e04rpf. It
must not be changed between the calls.
 2: $\mathbf{nvar}$ – IntegerInput

On entry:
$n$, the number of decision variables
$x$ in the problem. It must be unchanged from the value set during the initialization of the handle by
e04raf.
 3: $\mathbf{x}\left({\mathbf{nvar}}\right)$ – Real (Kind=nag_wp) arrayInput/Output

Note: intermediate stops take place only if ${\mathbf{Monitor\; Frequency}}>0$.
On entry: if
${\mathbf{Initial\; X}}=\mathrm{USER}$ (the default),
${x}^{0}$, the initial estimate of the variables
$x$, otherwise
x need not be set.
On intermediate exit:
the value of the variables $x$ at the end of the current outer iteration.
On intermediate reentry: the input is ignored.
On final exit: the final value of the variables $x$.
 4: $\mathbf{nnzu}$ – IntegerInput

On entry: the dimension of array
u.
If
${\mathbf{nnzu}}=0$,
u will not be referenced; otherwise it needs to match the dimension of constraints defined by
e04rhf and
e04rjf as explained in
Section 3.1.
Constraint:
${\mathbf{nnzu}}\ge 0$.
 5: $\mathbf{u}\left({\mathbf{nnzu}}\right)$ – Real (Kind=nag_wp) arrayInput/Output

Note: intermediate stops take place only if ${\mathbf{Monitor\; Frequency}}>0$.
If
${\mathbf{nnzu}}>0$,
u holds Lagrangian multipliers (dual variables) for (standard) constraints, i.e., simple bounds defined by
e04rhf and a set of
${m}_{B}$ linear constraints defined by
e04rjf. Either their initial estimates, intermediate approximations or final values, see
Section 3.1.
If
${\mathbf{nnzu}}=0$,
u will not be referenced.
On entry: if
${\mathbf{Initial\; U}}=\mathrm{USER}$ (the default is
$\mathrm{AUTOMATIC}$),
${u}^{0}$, the initial estimate of the Lagrangian multipliers
$u$, otherwise
u need not be set.
On intermediate exit:
the estimate of the multipliers $u$ at the end of the current outer iteration.
On intermediate reentry: the input is ignored.
On exit: the final value of multipliers $u$.
 6: $\mathbf{nnzuc}$ – IntegerInput

On entry: the dimension of array
uc. If
${\mathbf{nnzuc}}=0$,
uc will not be referenced. This argument is reserved for future releases of the NAG Library which will allow definition of second order cone constraints. It needs to be set to
$0$ at the moment.
Constraint:
${\mathbf{nnzuc}}=0$.
 7: $\mathbf{uc}\left({\mathbf{nnzuc}}\right)$ – Real (Kind=nag_wp) arrayInput/Output

uc is reserved for future releases of the NAG Library which will allow definition of second order cone constraints. It is not referenced at the moment.
 8: $\mathbf{nnzua}$ – IntegerInput

On entry: the dimension of array
ua. If
${\mathbf{nnzua}}=0$,
ua will not be referenced; otherwise it needs to match the total number of nonzeros in all matrix Lagrangian multipliers (constraints defined by
e04rnf and
e04rpf) as explained in
Section 3.1.
Constraint:
${\mathbf{nnzua}}\ge 0$.
 9: $\mathbf{ua}\left({\mathbf{nnzua}}\right)$ – Real (Kind=nag_wp) arrayInput/Output

Note: intermediate stops take place only if ${\mathbf{Monitor\; Frequency}}>0$.
If
${\mathbf{nnzua}}>0$,
ua holds the Lagrangian multipliers for matrix constraints defined by
e04rnf and
e04rpf, see
Section 3.1.
If
${\mathbf{nnzua}}=0$,
ua will not be referenced.
On entry: if
${\mathbf{Initial\; U}}=\mathrm{USER}$ (the default is
$\mathrm{AUTOMATIC}$),
${U}^{0}$, the initial estimate of the matrix Lagrangian multipliers
$U$, otherwise
ua need not be set.
On intermediate exit:
the estimate of the matrix multipliers $U$ at the end of the outer iteration.
On intermediate reentry: the input is ignored.
On final exit: the final estimate of the multipliers $U$.
 10: $\mathbf{rinfo}\left(32\right)$ – Real (Kind=nag_wp) arrayOutput

On intermediate or final exit: error measures and various indicators (see Section 11 for details) at the end of the current (or final) outer iteration as given in the table below:
$1$  objective function value $f\left(x\right)$
$2$  optimality (12)
$3$  feasibility (13)
$4$  complementarity (14)
$5$  minimum penalty
$6$  relative precision (11)
$7$  relative duality gap (10)
$8$  precision $\left|f\left({x}^{\ell}\right)-f\left({x}^{\ell +1}\right)\right|$
$9$  duality gap
$10$  minimum penalty for (standard) inequalities $p$
$11$  minimum penalty for matrix inequalities $P$
$12$  feasibility of equality constraints
$13$  feasibility of (standard) inequalities
$14$  feasibility of matrix inequalities
$15$  complementarity of equality constraints
$16$  complementarity of (standard) inequalities
$17$  complementarity of matrix inequalities
$18$–$23$  DIMACS error measures (16) (only if turned on by DIMACS Measures)
$24$–$32$  reserved for future use
 11: $\mathbf{stats}\left(32\right)$ – Real (Kind=nag_wp) arrayOutput

On intermediate or final exit: solver statistics at the end of the current (or final) outer iteration as given in the table below. Note that time statistics are provided only if Stats Time is set (the default is $\mathrm{NO}$); the measured time is returned in seconds.
$1$  number of the outer iterations
$2$  total number of the inner iterations
$3$  total number of the linesearch steps
$4$  number of evaluations of the augmented Lagrangian $F\left(\cdot\right)$ (see (8))
$5$  number of evaluations of $\nabla F\left(\cdot\right)$
$6$  number of evaluations of ${\nabla}^{2}F\left(\cdot\right)$
$7$  reserved for future use
$8$  total running time of the solver
$9$  total running time of the solver without evaluations of the user's functions and monitoring stops
$10$  time spent in the inner iterations
$11$  time spent in the Lagrangian multipliers updates
$12$  time spent in the penalty parameters updates
$13$  time spent in the matrix feasibility computation
$14$  time of evaluations of $F\left(\cdot\right)$
$15$  time of evaluations of $\nabla F\left(\cdot\right)$
$16$  time of evaluations of ${\nabla}^{2}F\left(\cdot\right)$
$17$  time of factorizations of the Newton system
$18$  time of factorizations of the matrix constraints
$19$–$32$  reserved for future use
 12: $\mathbf{inform}$ – IntegerInput/Output

Note: intermediate stops take place only if ${\mathbf{Monitor\; Frequency}}>0$.
On initial entry: no effect.
On intermediate exit:
${\mathbf{inform}}=1$.
On intermediate reentry: if set to $0$, solving the current problem is terminated and the routine returns ${\mathbf{ifail}}={\mathbf{20}}$; otherwise the routine continues.
On final exit: ${\mathbf{inform}}=0$.
 13: $\mathbf{ifail}$ – IntegerInput/Output

On entry: ifail must be set to $0$, $-1$ or $1$. If you are unfamiliar with this argument you should refer to Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, because for this routine the values of the output arguments may be useful even if ${\mathbf{ifail}}\ne {\mathbf{0}}$ on exit, the recommended value is $-1$.
When the value $-1$ or $1$ is used it is essential to test the value of ifail on exit.
On exit:
${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see
Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Note: e04svf may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the routine:
 ${\mathbf{ifail}}=1$

The supplied
handle does not define a valid handle to the data structure for the NAG optimization modelling suite. It has not been initialized by
e04raf or it has been corrupted.
 ${\mathbf{ifail}}=2$

This solver does not support fixed variables.
This solver does not support general nonlinear objective and constraints.
 ${\mathbf{ifail}}=3$

A different solver from the suite has already been used.
 ${\mathbf{ifail}}=4$

On entry, ${\mathbf{nvar}}=\langle \mathit{value}\rangle$, expected $\mathrm{value}=\langle \mathit{value}\rangle$.
Constraint: nvar must match the value given during initialization of handle.
 ${\mathbf{ifail}}=5$

On entry, ${\mathbf{nnzu}}=\langle \mathit{value}\rangle$.
nnzu does not match the size of the Lagrangian multipliers for (standard) constraints.
The correct value is 0 for no (standard) constraints.
On entry, ${\mathbf{nnzu}}=\langle \mathit{value}\rangle$.
nnzu does not match the size of the Lagrangian multipliers for (standard) constraints.
The correct value is either 0 or $\langle \mathit{value}\rangle$.
On entry, ${\mathbf{nnzua}}=\langle \mathit{value}\rangle$.
nnzua does not match the size of the Lagrangian multipliers for matrix constraints.
The correct value is 0 for no matrix constraints.
On entry, ${\mathbf{nnzua}}=\langle \mathit{value}\rangle$.
nnzua does not match the size of the Lagrangian multipliers for matrix constraints.
The correct value is either 0 or $\langle \mathit{value}\rangle$.
On entry, ${\mathbf{nnzuc}}=\langle \mathit{value}\rangle$.
nnzuc does not match the size of the Lagrangian multipliers for second order cone constraints.
The correct value is 0 for no such constraints.
 ${\mathbf{ifail}}=20$

User requested termination during a monitoring step.
 ${\mathbf{ifail}}=21$

The current starting point is unusable.
The starting point ${x}^{0}$, either provided by the user (if ${\mathbf{Initial\; X}}=\mathrm{USER}$, the default) or the automatic estimate (if ${\mathbf{Initial\; X}}=\mathrm{AUTOMATIC}$), must not be extremely infeasible in the matrix constraints (infeasibility of order ${10}^{6}$ or higher) and it must be possible to evaluate all the functions used in the problem formulation.
In the unlikely case that this error is triggered, it is necessary to provide a better estimate of the initial values.
 ${\mathbf{ifail}}=22$

Outer iteration limit has been reached.
The requested accuracy is not achieved.
If ${\mathbf{Outer\; Iteration\; Limit}}$ is left at its default, this error indicates numerical difficulties. Consider whether the stopping tolerances (${\mathbf{Stop\; Tolerance\; 1}}$, ${\mathbf{Stop\; Tolerance\; 2}}$, ${\mathbf{Stop\; Tolerance\; Feasibility}}$) are set too low or whether the optional parameters affecting the behaviour of the penalty updates (${\mathbf{P\; Update\; Speed}}$, ${\mathbf{P\; Min}}$ or ${\mathbf{Pmat\; Min}}$) have been modified unwisely. The iteration log should reveal more about the misbehaviour. Providing a different starting point might help in certain situations.
 ${\mathbf{ifail}}=23$

The inner subproblem could not be solved to the required accuracy.
Inner iteration limit has been reached.
The inner subproblem could not be solved to the required accuracy.
Limited progress in the inner subproblem triggered a stop (heuristic inner stop criteria).
The inner subproblem could not be solved to the required accuracy.
Line search or another internal component failed.
A problem with the convergence of the inner subproblem is typically a sign of numerical difficulties for the whole algorithm. The inner subproblem might be stopped before reaching the required accuracy because of the ${\mathbf{Inner\; Iteration\; Limit}}$, because a heuristic detected no progress in the inner iterations (if ${\mathbf{Inner\; Stop\; Criteria}}=\mathrm{HEURISTIC}$, the default) or because an internal component failed (for example, the line search was unable to find a suitable step). The algorithm tries to recover; however, it might give up after several attempts with one of these error messages.
If it occurs in the very early iterations, consider increasing ${\mathbf{Inner\; Stop\; Tolerance}}$ and possibly ${\mathbf{Init\; Value\; P}}$ or ${\mathbf{Init\; Value\; Pmat}}$ which should ease the first iterations. An occurrence in later iterations indicates numerical difficulties typically due to scaling and/or ill-conditioning, or the problem is close to infeasible. Reducing the tolerance on the stopping criteria or increasing ${\mathbf{P\; Update\; Speed}}$ might be of limited help.
 ${\mathbf{ifail}}=24$

Unable to make progress, the algorithm was stopped.
This error is returned if the solver cannot decrease the duality gap over a range of iterations. This can be due to the scaling of the problem or the problem might be close to primal or dual infeasibility.
 ${\mathbf{ifail}}=50$

The algorithm converged to a suboptimal solution.
The full accuracy was not achieved. The solution should still be usable.
This error may be reported only if ${\mathbf{Stop\; Criteria}}=\mathrm{SOFT}$ (the default). The solver predicted that it is unable to reach a better estimate of the solution. However, the error measures indicate that the point is a reasonable approximation. Typically, only the norm of the gradient of the Lagrangian (optimality) does not fully satisfy the requested tolerance whereas the others are well below the tolerance.
Setting ${\mathbf{Stop\; Criteria}}=\mathrm{STRICT}$ will disallow this error but it is unlikely that the algorithm would reach a better solution.
 ${\mathbf{ifail}}=51$

The problem was found to be infeasible during preprocessing.
One or more of the constraints (or a part of one after preprocessing) is violated by more than ${\epsilon}_{\mathrm{feas}}$ (${\mathbf{Stop\; Tolerance\; Feasibility}}$).
 ${\mathbf{ifail}}=52$

The problem was found to be unbounded during preprocessing.
The objective function consists of an unrestricted ray and thus the problem does not have a solution.
 ${\mathbf{ifail}}=53$

The problem seems to be infeasible, the algorithm was stopped.
Whilst the algorithm cannot definitively detect that the problem is infeasible, several indirect indicators suggest that it might be the case.
 ${\mathbf{ifail}}=54$

The problem seems to be unbounded, the algorithm was stopped.
Whilst the algorithm cannot definitively detect that the problem is unbounded, several indirect indicators (such as a rapid decrease in the objective function and a lack of convergence in the inner subproblem) suggest that this might be the case. A good scaling of the objective function is always highly recommended to avoid situations where unusual behaviour falsely triggers this error exit.
 ${\mathbf{ifail}}=99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
See
Section 3.9 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=399$
Your licence key may have expired or may not have been installed correctly.
See
Section 3.8 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=999$
Dynamic memory allocation failed.
See
Section 3.7 in How to Use the NAG Library and its Documentation for further information.
7 Accuracy
The accuracy of the solution is driven by optional parameters
Stop Tolerance 1,
Stop Tolerance 2,
Stop Tolerance Feasibility and
Stop Criteria and in certain cases
DIMACS Measures.
If
${\mathbf{ifail}}={\mathbf{0}}$ on the final exit, the returned point satisfies Karush–Kuhn–Tucker (KKT) conditions to the requested accuracy (under the default settings close to
$\sqrt{\epsilon}$) and thus it is a good estimate of a local solution. If
${\mathbf{ifail}}={\mathbf{50}}$, some of the convergence conditions were not fully satisfied but the point still seems to be a reasonable estimate and should be usable. Please refer to
Section 11.2 and the description of the particular options.
8 Parallelism and Performance
e04svf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
e04svf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the
X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the
Users' Note for your implementation for any additional implementationspecific information.
9 Further Comments
9.1 Description of the Printed Output
The solver can print information to give an overview of the problem and of the progress of the computation. The output may be sent to two independent unit numbers which are set by the optional parameters Print File and Monitoring File. The optional parameters Print Level, Print Options and Monitoring Level determine the exposed level of detail. This allows, for example, a detailed log to be generated in a file while condensed information is displayed on the screen.
By default (${\mathbf{Print\; File}}=6$, ${\mathbf{Print\; Level}}=2$), four sections are printed to the standard output: a header, a list of options, an iteration log and a summary.
Header
The header contains statistics about the size of the problem as represented internally, i.e., it reflects any changes imposed by preprocessing and problem transformations (see, for example,
Presolve Block Detect and
Transform Constraints). The header may look like:
E04SV, NLP-SDP Solver (Pennon)
Number of variables             2                [eliminated 0]
                          simple  linear  nonlin
(Standard) inequalities        3       0       0
(Standard) equalities                  0       0
Matrix inequalities            1       1         [dense 2, sparse 0]
                                                 [max dimension 3]
It shows the total number of variables and how many of them were eliminated (e.g., fixed by a simple equality). A constraint with both a lower and an upper bound counts as 2 inequalities. Simple bounds are set by
e04rhf, matrix inequalities by
e04rnf and
e04rpf and standard equalities and inequalities by
e04rjf. Note that matrix constraints of dimension 1 are extracted and treated as (standard) inequalities as well. The header report concludes with the number of matrix constraints factorized as dense and sparse matrices, together with the largest dimension of the matrix inequalities.
Optional parameters list
The list shows all options of the solver, each displayed on one line. The line contains the option name, its current value and an indicator for how it was set. The options left at their defaults are noted by (d), the ones set by the user are noted by (U) and the options reset by the solver by (S). The solver will automatically set options which are set to
$\mathrm{AUTO}$ or options which are not possible to satisfy in the given context (e.g., requesting
DIMACS Measures for a nonlinear problem). Note that the output format is compatible with the file format expected by
e04zpf. The output might look as follows:
Outer Iteration Limit = 20 * U
Stop Tolerance 1 = 1.00000E-06 * d
Stop Tolerance 2 = 1.00000E-07 * d
Hessian Density = Dense * S
Iteration log
If
${\mathbf{Print\; Level}}=2$, the status of each major iteration is condensed to one line. The line shows the major iteration number (
$0$ represents the starting point), the current objective value, KKT measures (optimality, feasibility and complementarity), minimal penalty and the number of inner iterations performed. Note that all these values are also available in
${\mathbf{rinfo}}\left(1\right),\dots ,{\mathbf{rinfo}}\left(5\right)$ and
${\mathbf{stats}}\left(1\right)$. The output might look as follows:
--------------------------------------------------------------------
 it |  objective  |  optim   |   feas   |  compl   | pen min | inner
--------------------------------------------------------------------
  0   0.00000E+00   7.34E+00   1.23E-01   4.41E+01   1.00E+00     0
  1   3.01998E-01   2.54E-03   0.00E+00   1.89E+00   1.00E+00     6
  2   2.53008E+00   1.06E-03   1.30E-01   3.22E-01   3.17E-01     8
  3   2.08172E+00   6.52E-03   1.85E-02   4.54E-02   1.01E-01     7
  4   2.01060E+00   6.47E-03   4.10E-03   1.02E-02   3.19E-02     3
Occasionally, a one letter flag is printed at the end of the line indicating that the inner subproblem was not solved to the required accuracy. The possibilities are
M for maximum number of inner iterations,
L for difficulties in the line search and
! when a heuristic stop took place. Repeated troubles in the subproblems may lead to
${\mathbf{ifail}}={\mathbf{23}}$. The output below had
${\mathbf{Inner\; Iteration\; Limit}}=5$ which was not enough in the first subproblem (first outer iteration).
--------------------------------------------------------------------
 it |  objective  |  optim   |   feas   |  compl   | pen min | inner
--------------------------------------------------------------------
  0   0.00000E+00   1.46E+03   5.01E+01   1.46E+03   6.40E+01     0
  1   3.78981E+02   3.86E+01   0.00E+00   1.21E+04   6.40E+01     5 M
  2   9.11724E+02   1.46E-02   0.00E+00   9.24E+02   4.45E+01     5
All KKT measures should normally converge to zero as the algorithm progresses and, once the requested accuracy (Stop Tolerance 2) is achieved, the solver stops. However, the convergence is not necessarily monotonic. The penalty parameters are decreased at each major iteration which should improve the overall feasibility of the problem. This also increases the ill-conditioning which might lead to a higher number of inner iterations. A very high number of inner iterations usually signals numerical difficulties. See Section 11 for the algorithmic details.
If ${\mathbf{Print\; Level}}>2$, each major iteration produces significantly more detailed output comprising detailed error measures and the output from every inner iteration. The output is self-explanatory so it is not featured here in detail.
Summary
Once the solver finishes, a detailed summary is produced. An example is shown below:
--------------------------------------------------------------
 Status: converged, an optimal solution found
--------------------------------------------------------------
 Final objective value                2.300000E+01
 Relative precision                   5.873755E-09
 Optimality                           1.756062E-10
 Feasibility                          9.048738E-08
 Complementarity                      1.855566E-08
 DIMACS error 1                       8.780308E-11
 DIMACS error 2                       0.000000E+00
 DIMACS error 3                       0.000000E+00
 DIMACS error 4                       4.524369E-08
 DIMACS error 5                       4.065998E-10
 DIMACS error 6                       3.948012E-10
 Iteration counts
   Outer iterations                   13
   Inner iterations                   82
   Linesearch steps                   95
 Evaluation counts
   Augm. Lagr. values                 96
   Augm. Lagr. gradient               96
   Augm. Lagr. hessian                82
 Timing
   Total time                         0 h  0 min  3 sec
   Evaluations + monitoring           0.04 sec
   Solver itself                      3.09 sec
   Inner minimization step            2.72 sec ( 87.1%)
   Augm. Lagr. value                  0.28 sec (  9.0%)
   Augm. Lagr. gradient               0.67 sec ( 21.6%)
   Augm. Lagr. hessian                1.11 sec ( 35.4%)
   system matr. factor.               0.64 sec ( 20.5%)
   const. matr. factor.               0.40 sec ( 12.8%)
   Multiplier update                  0.01 sec (  0.3%)
   Penalty update                     0.02 sec (  0.5%)
   Feasibility check                  0.15 sec (  4.7%)
--------------------------------------------------------------
It starts with the status line of the overall result which matches the
ifail value. It is followed by the final objective value and the error measures (including
DIMACS Measures if turned on). Iteration counters, numbers of evaluations of the Augmented Lagrangian function and timing of the routine conclude the section. The timing of the algorithm is displayed only if
Stats Time is set.
9.2
Internal Changes
Internal changes have been made to this routine as follows:
 At Mark 26.1: e04svf cannot, at the moment, handle fixed variables in the model. You are now able to define such a model and e04svf will return ${\mathbf{ifail}}={\mathbf{2}}$ in this case.
For details of all known issues which have been reported for the NAG Library please refer to the
Known Issues list.
10
Example
Semidefinite programming has many applications in several fields of mathematics, such as combinatorial optimization, finance, statistics, control theory or structural optimization. However, these applications seldom come in the form of
(2) or
(3). Usually a reformulation is needed, or even a relaxation is employed, to achieve the desired formulation. This is also the case for the Lovász
$\vartheta $ function computed in this example. See also
Section 10 in
e04raf for links to further examples in the suite.
The Lovász $\vartheta $ function (also called the $\vartheta $ number) of an undirected graph $G=\left(V,E\right)$ is an important quantity in combinatorial optimization. It gives an upper bound on the Shannon capacity of the graph $G$ and is also related to the clique number and the chromatic number of the complement of $G$, both of which are NP-hard to compute.
The
$\vartheta $ function can be expressed in various ways, here we use the following:
where
$n=\left|V\right|$ and
${\mathbb{S}}^{n}$ denotes the space of real symmetric
$n$ by
$n$ matrices. This eigenvalue optimization problem is easy to reformulate as an SDP problem by introducing an artificial variable
$t$ as follows:
Finally, this can be written as
(2), which is the formulation used in the example:
where
$J$ is a matrix of all ones and
${E}_{ij}$ is a matrix of all zeros except for ones in positions
$\left(i,j\right)$ and
$\left(j,i\right)$.
The example also demonstrates how to set the optional parameters and how to retrieve them.
The data file stores the Petersen graph whose $\vartheta $ is $4$.
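The problem data described above can be sketched in Python (this is purely illustrative and not part of the NAG interface; the vertex labelling of the Petersen graph below is an assumption):

```python
import numpy as np

# Petersen graph: 10 vertices, 15 edges (outer 5-cycle, inner pentagram, spokes).
n = 10
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),   # outer cycle
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),   # inner pentagram
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]   # spokes

# J is the all-ones matrix appearing in the objective.
J = np.ones((n, n))

# E_ij is zero except for ones in positions (i, j) and (j, i);
# one such matrix per edge enters the constraints.
def e_mat(i, j, n):
    E = np.zeros((n, n))
    E[i, j] = E[j, i] = 1.0
    return E

constraint_mats = [e_mat(i, j, n) for (i, j) in edges]
```

Each vertex of the Petersen graph has degree 3, and each ${E}_{ij}$ is symmetric with exactly two nonzero entries.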
10.1
Program Text
Program Text (e04svfe.f90)
10.2
Program Data
Program Data (e04svfe.d)
10.3
Program Results
Program Results (e04svfe.r)
11
Algorithmic Details
This section contains a description of the algorithm used in
e04svf which is based on the implementation of the code called Pennon. For further details, see
Kočvara and Stingl (2003),
Stingl (2006) and
Kočvara and Stingl (2007).
For simplicity, we will use the following problem formulation; its connection to (SDP) and (BMISDP) is easy to see:
where
$f$,
${g}_{k}$,
${h}_{k}$ are
${\mathcal{C}}^{2}$ functions from
${\mathbb{R}}^{n}$ to
$\mathbb{R}$ and
${\mathcal{A}}_{k}$ is a
${\mathcal{C}}^{2}$ matrix function from
${\mathbb{R}}^{n}$ to
${\mathbb{S}}^{{m}_{k}}$. Here
${\mathbb{S}}^{m}$ denotes the space of real symmetric matrices
$m\times m$ and
$S\in {\mathbb{S}}^{m}$,
$S\succeq 0$ stands for a constraint on eigenvalues of
$S$, namely the matrix
$S$ should be positive semidefinite. Furthermore, we define the inner product on
${\mathbb{S}}^{m}$ by
${\langle A,B\rangle}_{{\mathbb{S}}^{m}}=\mathrm{trace}\left(AB\right)$. The index
${\mathbb{S}}^{m}$ will be omitted whenever the dimension is clear from the context. Finally, for
$\Phi :{\mathbb{S}}^{m}\to {\mathbb{S}}^{m}$ and
$X\text{,}Y\in {\mathbb{S}}^{m}$,
$D\Phi \left(X;Y\right)$ denotes the directional derivative of
$\Phi $ with respect to
$X$ in direction
$Y$.
11.1
Overview
The algorithm is based on a (generalized) augmented Lagrangian approach and on a suitable choice of smooth penalty/barrier functions ${\phi}_{g}:\mathbb{R}\to \mathbb{R}$ for (standard) inequality constraints and ${\phi}_{A}:\mathbb{R}\to \mathbb{R}$ for constraints on matrix eigenvalues. By means of ${\phi}_{A}$ we define a penalty/barrier function for matrix inequalities as follows.
Let
$A\in {\mathbb{S}}^{m}$ have an eigenvalue decomposition
$A={S}^{\mathrm{T}}\Lambda S$ where
$\Lambda =\mathrm{diag}{\left({\lambda}_{1},{\lambda}_{2},\dots ,{\lambda}_{m}\right)}^{\mathrm{T}}$. We define matrix function
${\Phi}_{P}:{\mathbb{S}}^{m}\to {\mathbb{S}}^{m}$ for
$P>0$ as
Both
${\phi}_{g}$ and
${\phi}_{A}$ satisfy a number of assumptions (see
Kočvara and Stingl (2003)) guaranteeing, in particular, that for any
$p$,
$P>0$
In the remainder of the text, we use the simplified notation ${\phi}_{p}\left(\cdot\right)=p{\phi}_{g}\left(\cdot/p\right)$.
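The construction of ${\Phi}_{P}$ from a scalar function via the eigenvalue decomposition can be sketched in Python as follows; the logarithmic barrier used for ${\phi}_{A}$ here is purely illustrative (the solver's actual choice is motivated differently, see Section 11.3):

```python
import numpy as np

def phi_scalar(t):
    # Illustrative penalty/barrier function: -log(1 - t), defined for t < 1.
    return -np.log(1.0 - t)

def phi_mat(A, P):
    # Phi_P(A) = S diag(P * phi(lambda_i / P)) S^T, where A = S diag(lambda) S^T
    # (numpy's eigh convention places eigenvectors in the columns of S).
    lam, S = np.linalg.eigh(A)
    return S @ np.diag(P * phi_scalar(lam / P)) @ S.T
```

For a diagonal argument, ${\Phi}_{P}$ simply applies the scaled scalar function to each diagonal entry, which is a quick sanity check on the construction.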
Thus for any
$p$,
$P>0$, problem
(4) has the same solution as the following augmented problem
The Lagrangian of
(7) can be viewed as a (generalized) augmented Lagrangian of
(4):
where
$u\in {\mathbb{R}}^{{m}_{g}}$,
$v\in {\mathbb{R}}^{{m}_{h}}$ and
$U=\left({U}_{1},\dots ,{U}_{{m}_{A}}\right)$,
${U}_{k}\in {\mathbb{S}}^{{p}_{k}}$,
$k=1,\dots ,{m}_{A}$ are Lagrange multipliers associated with the (standard) inequalities and equalities and the matrix inequality constraints, respectively.
The algorithm combines ideas of the (exterior) penalty and (interior) barrier methods with the augmented Lagrangian method; it can be stated as follows:
 Algorithm 1 (Outer Loop)

Let ${x}^{0}$, ${u}^{0}$, ${v}^{0}$ and ${U}^{0}$ be given. Let ${p}^{0}>0$, ${P}^{0}>0$, ${\alpha}^{0}>0$. For $\ell =0,1,\dots $ repeat until the stopping criteria are satisfied or the maximum number of iterations is reached:
(i) 
Find ${x}^{\ell +1}$, ${v}^{\ell +1}$ satisfying

(ii) 
Update Lagrangian multipliers

(iii) 
Update penalty parameters and inner problem stopping criteria

Step
(i) of
Algorithm 1, referred to below as the
inner problem, is the most time-consuming and thus the choice of the solver for
(9) is critical for the overall efficiency of the method. See
Section 11.4 below.
The inequality Lagrangian multipliers update in step
(ii) is motivated by the fact that if
${x}^{\ell +1}$,
${v}^{\ell +1}$ solve
(9) exactly in iteration
$\ell $, we obtain
Details can be found, for example, in
Stingl (2006).
In practice, numerical studies showed that it is not advantageous to perform the full updates of multipliers
$u$,
$U$. Firstly, large changes in the multipliers may lead to a large number of iterations in the subsequent solution of
(9) and, secondly, the multipliers might become ill-conditioned after a few steps and the algorithm suffers from numerical instabilities. To overcome these difficulties, a restricted update is performed instead.
New Lagrangian multipliers for (standard) inequalities
${u}_{\mathit{k}}^{\ell +1}$, for
$\mathit{k}=1,2,\dots ,{m}_{g}$ are limited not to violate the following bound
for a given
$0<{\mu}_{g}<1$ (see
U Update Restriction).
A similar strategy is applied to the matrix multipliers
${U}_{k}^{\ell +1}$ as well. For
$0<{\mu}_{A}<1$ (see
Umat Update Restriction) set
The penalty parameters
$p,P$ in step
(iii) are updated by some constant factor dependent on the initial penalty parameters
${p}^{0},{P}^{0}$ and
P Update Speed. The update process is stopped when
${p}_{\mathrm{min}}$ and
${P}_{\mathrm{min}}$ are reached (see
P Min,
Pmat Min).
Additional details about the multiplier and penalty update strategies, as well as local and global convergence properties under standard assumptions can be found in an extensive study
Stingl (2006).
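A minimal sketch of the restricted multiplier update, assuming the bound keeps the ratio ${u}^{\ell +1}/{u}^{\ell}$ within $\left[{\mu}_{g},1/{\mu}_{g}\right]$ (one plausible reading of the restriction described above; the solver's exact rule may differ):

```python
import numpy as np

def restricted_update(u_old, u_hat, mu):
    # Clip the full multiplier update u_hat so that the ratio to the
    # previous multiplier u_old stays within [mu, 1/mu]  (assumed form
    # of the bound), damping large changes between outer iterations.
    lo = mu * u_old
    hi = u_old / mu
    return np.clip(u_hat, lo, hi)
```

Values of `mu` close to $1$ smooth the multiplier sequence strongly, while small values allow the full update to pass through almost unchanged, matching the description of U Update Restriction.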
11.2
Stopping Criteria
Algorithm 1 is stopped when all the stopping criteria are satisfied to the requested accuracy; these are:
and those based on Karush–Kuhn–Tucker (KKT) error measures. To keep the notation simple, formulation
(4) is assumed and the iteration index
$\ell $ is dropped:
Here
${\epsilon}_{1}$,
${\epsilon}_{2}$,
${\epsilon}_{\mathrm{feas}}$ may be set in the option settings as
Stop Tolerance 1,
Stop Tolerance 2 and
Stop Tolerance Feasibility, respectively.
Note that if ${\mathbf{Task}}=\mathrm{FEASIBLEPOINT}$, only the feasibility is taken into account.
There is an option for linear SDP problems to switch from stopping criteria based on the KKT conditions to
DIMACS Measures, see
Mittelmann (2003). This is the default choice. To keep the notation readable, these are defined here only for the following simpler formulation of linear SDP rather than
(2):
In this case the algorithm stops when:
where
${\mathcal{A}}^{*}\left(\cdot\right)$ denotes the adjoint operator to
$\mathcal{A}\left(\cdot\right)$,
${\left[{\mathcal{A}}^{*}\left(U\right)\right]}_{i}=\u2329{A}_{i},U\u232a$.
They can be viewed as a scaled version of the KKT conditions. ${\mathrm{Derr}}_{1}$ represents the (scaled) norm of the gradient of the Lagrangian, ${\mathrm{Derr}}_{2}$ and ${\mathrm{Derr}}_{4}$ the dual and primal infeasibility, respectively, and ${\mathrm{Derr}}_{5}$ and ${\mathrm{Derr}}_{6}$ measure the duality gap and the complementary slackness. Note that in this solver ${\mathrm{Derr}}_{2}=0$ by definition and ${\mathrm{Derr}}_{3}$ is automatically zero because the formulation involves slack variables which are not used here.
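Scaled KKT residuals of this DIMACS flavour can be sketched as follows for the simpler linear SDP formulation above; the denominator scalings used here are illustrative assumptions and may differ from those used by the solver:

```python
import numpy as np

def dimacs_errors(c, A0, A_list, x, U):
    # Scaled residuals for  min c^T x  s.t.  A(x) = sum_i x_i A_i - A0 >= 0,
    # with dual multiplier U >= 0.  The denominators are illustrative only.
    Ax = sum(xi * Ai for xi, Ai in zip(x, A_list)) - A0
    adjU = np.array([np.trace(Ai @ U) for Ai in A_list])   # [A*(U)]_i = <A_i, U>
    primal = float(c @ x)
    dual = float(np.trace(A0 @ U))
    err1 = np.linalg.norm(c - adjU) / (1.0 + np.abs(c).max())              # grad of Lagrangian
    err4 = max(0.0, -np.linalg.eigvalsh(Ax).min()) / (1.0 + np.abs(A0).max())  # primal infeas.
    err5 = (primal - dual) / (1.0 + abs(primal) + abs(dual))               # duality gap
    err6 = np.trace(Ax @ U) / (1.0 + abs(primal) + abs(dual))              # compl. slackness
    return err1, err4, err5, err6
```

At an exact primal-dual optimum all of these residuals vanish, which is the property the stopping test relies on.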
11.3
Choice of penalty functions ${\phi}_{g}$ and ${\phi}_{A}$
To treat the (standard) inequality constraints
${g}_{k}\left(x\right)\ge 0$, we use the penalty/barrier function proposed by
Ben-Tal and Zibulevsky (1997):
with default
$\bar{\tau}=\frac{1}{2}$.
The choice of
${\phi}_{A}$ (and thus of
${\Phi}_{P}$) is motivated by the complexity of the evaluation of
${\Phi}_{P}$ and its derivatives. If
${\phi}_{A}$ is defined as
it is possible to avoid the explicit eigenvalue decomposition in
(5) as it can be seen in the formulae below (note that index
$k$ is omitted):
where
For details, see
Kočvara and Stingl (2003). Note that, in particular, formula
(17) requires nontrivial computational resources even if careful handling of the sparsity of partial derivatives of
$\mathcal{A}\left(x\right)$ is implemented.
e04svf uses a set of strategies described in
Fujisawa et al. (1997) adapted for parallel computation.
11.4
Solution of the inner problem
This section describes the solution of the inner problem (step
(i) of
Algorithm 1). We attempt to find an approximate solution of the following system (in
$x$ and
$v$) up to the given precision
$\alpha $:
where the penalty parameters
$p,P$, as well as the Lagrangian multipliers
$u$ and
$U$ are fixed.
A linesearch SQP framework is used due to its desirable convergence properties. It can be stated as follows.
 Algorithm 2 (Inner Loop)

Let ${x}^{0}$, ${v}^{0}$ be given (typically as the solution from the previous outer iteration), $p$, $P$, $u$, $U$ and $\alpha >0$ fixed. For $\ell =0,1,\dots $
(i) 
Find a descent direction $d$ by solving

(ii) 
Find a suitable step length $\delta $ and set

(iii) 
Stop if Inner Iteration Limit is reached or if

System
(20) is solved by the factorization routine MA97 (see
Hogg and Scott (2011)), in combination with an inertia correction strategy described in
Stingl (2006). The step length selection is guided by
Linesearch Mode.
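The inner loop can be illustrated by a generic damped Newton method with Armijo backtracking; this is a simplified stand-in for Algorithm 2 (no equality constraints, no inertia correction, and the stopping test is a plain gradient-norm check against the inner precision $\alpha $):

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, alpha=1e-8, max_iter=100):
    # Damped Newton iteration: stop once ||grad|| <= alpha (inner precision).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= alpha:
            break
        d = np.linalg.solve(hess(x), -g)     # descent direction (step (i))
        t = 1.0
        # Armijo backtracking: accept t once f decreases sufficiently (step (ii)).
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-10:
            t *= 0.5
        x = x + t * d
    return x
```

For a strictly convex quadratic the method takes the full unit step and converges in one iteration, which mirrors the FULLSTEP behaviour described for linear problems.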
If there are no equality constraints in the problem, the unconstrained minimization in Step
(i) of
Algorithm 1 simplifies to the modified Newton method with linesearch (for details, see
Kočvara and Stingl (2003)). Alternatively, the equality constraints
${h}_{k}\left(x\right)=0$ can be converted to two inequalities which would be treated with the remaining constraints (see
Transform Constraints).
12
Optional Parameters
Several optional parameters in e04svf define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of e04svf these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling
e04zmf anytime between the initialization of the handle by
e04raf and the call to the solver. Modification of the arguments during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
If any options are set by the solver (typically those with the choice of
$\mathrm{AUTO}$), their value can be retrieved by
e04znf. If the solver is called again, any such arguments are reset to their default values and the decision is made again.
The following is a list of the optional parameters available. A full description of each optional parameter is provided in
Section 12.1.
12.1
Description of the Optional Parameters
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
 the keywords, where the minimum abbreviation of each keyword is underlined;
 a parameter value,
where the letters $a$, $i$ and $r$ denote options that take character, integer and real values respectively;
 the default value, where the symbol $\epsilon $ is a generic notation for machine precision (see x02ajf).
All options accept the value $\mathrm{DEFAULT}$ to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any argument value given with this keyword will be ignored.
DIMACS Measures  $a$  Default $=\mathrm{CHECK}$ 
If the problem is a linear semidefinite programming problem, this argument specifies if DIMACS error measures (see
Section 11.2) should be computed and/or checked. In other cases, this option reverts to
$\mathrm{NO}$ automatically.
Constraint: ${\mathbf{DIMACS\; Measures}}=\mathrm{COMPUTE}$, $\mathrm{CHECK}$ or $\mathrm{NO}$.
Hessian Density  $a$  Default $=\mathrm{AUTO}$ 
This optional parameter guides the solver on how the Hessian matrix of the augmented Lagrangian $F\left(x,u,v,U,p,P\right)$ should be built. Option $\mathrm{AUTO}$ leaves the decision to the solver and is the recommended choice. Setting it to $\mathrm{DENSE}$ bypasses the autodetection and the Hessian is always built as a dense matrix. Option $\mathrm{SPARSE}$ instructs the solver to use sparse storage and factorization of the matrix if possible.
Constraint: ${\mathbf{Hessian\; Density}}=\mathrm{AUTO}$, $\mathrm{DENSE}$ or $\mathrm{SPARSE}$.
Infinite Bound Size  $r$  Default $\text{}={10}^{20}$ 
This defines the ‘infinite’ bound $\mathit{bigbnd}$ in the definition of the problem constraints. Any upper bound greater than or equal to $\mathit{bigbnd}$ will be regarded as $+\infty $ (and similarly any lower bound less than or equal to $-\mathit{bigbnd}$ will be regarded as $-\infty $). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
Constraint: ${\mathbf{Infinite\; Bound\; Size}}\ge 1000$.
Initial P  $a$  Default $=\mathrm{AUTOMATIC}$ 
This optional parameter defines the choice of the penalty optional parameters
${p}^{0}$,
${P}^{0}$, see
Algorithm 1.
 ${\mathbf{Initial\; P}}=\mathrm{AUTOMATIC}$
 The penalty optional parameters are chosen automatically as set by optional parameters Init Value P and Init Value Pmat and subject to automatic scaling. Note that ${P}^{0}$ might be increased so that the penalty function ${\Phi}_{P}\left(\cdot\right)$ is defined for all matrix constraints at the starting point.
 ${\mathbf{Initial\; P}}=\mathrm{KEEPPREVIOUS}$
 The penalty optional parameters are kept from the previous run of the solver if possible. If not, this option reverts to $\mathrm{AUTOMATIC}$. Note that even if the matrix penalty optional parameters are the same as in the previous run, they are still subject to a possible increase so that the penalty function ${\Phi}_{P}\left(\cdot\right)$ is well defined at the starting point.
Constraint: ${\mathbf{Initial\; P}}=\mathrm{AUTOMATIC}$ or $\mathrm{KEEPPREVIOUS}$.
Initial U  $a$  Default $=\mathrm{AUTOMATIC}$ 
This argument guides the solver on which initial Lagrangian multipliers are to be used.
 ${\mathbf{Initial\; U}}=\mathrm{AUTOMATIC}$
 The Lagrangian multipliers are chosen automatically as set by automatic scaling.
 ${\mathbf{Initial\; U}}=\mathrm{USER}$
 The values of arrays u and ua (if provided) are used as the initial Lagrangian multipliers subject to automatic adjustments. If one or the other array is not provided, the choice for missing data is as in $\mathrm{AUTOMATIC}$.
 ${\mathbf{Initial\; U}}=\mathrm{KEEPPREVIOUS}$
 The Lagrangian multipliers are kept from the previous run of the solver. If this option is set for the first run or optional parameters change the approach of the solver, the choice automatically reverts to $\mathrm{AUTOMATIC}$. This might be useful if the solver is hot started, for example, to achieve higher precision of the solution.
Constraint: ${\mathbf{Initial\; U}}=\mathrm{AUTOMATIC}$, $\mathrm{USER}$ or $\mathrm{KEEPPREVIOUS}$.
Initial X  $a$  Default $=\mathrm{USER}$ 
This argument guides which starting point
${x}^{0}$ is to be used.
 ${\mathbf{Initial\; X}}=\mathrm{AUTOMATIC}$
 The starting point is chosen automatically so that it satisfies simple bounds on the variables or as a zero vector. Input of argument x is ignored.
 ${\mathbf{Initial\; X}}=\mathrm{USER}$
 Initial values of argument x are used as a starting point.
Constraint: ${\mathbf{Initial\; X}}=\mathrm{AUTOMATIC}$ or $\mathrm{USER}$.
Init Value P  $r$  Default $=1.0$ 
This argument defines the value ${p}^{0}$, the initial penalty optional parameter for (standard) inequalities. A low value of the penalty causes the solution of the inner problem to be closer to the feasible region and thus to the desired result. However, it also increases the ill-conditioning of the system. It is not advisable to set the penalty too low unless a good starting point is provided.
Constraint: $\sqrt[4]{\epsilon}\le {\mathbf{Init\; Value\; P}}\le {10}^{4}$.
Init Value Pmat  $r$  Default $=1.0$ 
The value of this option suggests
${P}^{0}$, the initial penalty optional parameter for matrix inequalities. It is similar to
Init Value P (and the same advice applies), however,
${P}^{0}$ is increased automatically if the matrix constraints are more infeasible than the current penalty optional parameter.
Constraint: $\sqrt[4]{\epsilon}\le {\mathbf{Init\; Value\; Pmat}}\le {10}^{4}$.
Inner Iteration Limit  $i$  Default $=100$ 
The maximum number of the inner iterations (Newton steps) to be performed by
Algorithm 2 in each outer iteration. Setting the option too low might lead to
${\mathbf{ifail}}={\mathbf{23}}$. Values higher than
$100$ are unlikely to improve convergence.
Constraint: ${\mathbf{Inner\; Iteration\; Limit}}>0$.
Inner Stop Criteria  $a$  Default $=\mathrm{HEURISTIC}$ 
The precision
$\alpha $ for the solution of the inner subproblem is determined in
Algorithm 1 and under typical circumstances
Algorithm 2 is expected to reach this precision within the given
Inner Iteration Limit. If any problems are detected and
${\mathbf{Inner\; Stop\; Criteria}}=\mathrm{HEURISTIC}$,
Algorithm 2 is allowed to stop before reaching the requested precision or the
Inner Iteration Limit. This usually saves many unfruitful iterations and the solver may recover in the following iterations. If you suspect that the heuristic problem detection is not suitable for your problem, setting
${\mathbf{Inner\; Stop\; Criteria}}=\mathrm{STRICT}$ disallows such behaviour.
Constraint: ${\mathbf{Inner\; Stop\; Criteria}}=\mathrm{HEURISTIC}$ or $\mathrm{STRICT}$.
Inner Stop Tolerance  $r$  Default $={10}^{-2}$ 
This option sets the required precision
${\alpha}^{0}$ for the first inner problem solved by
Algorithm 2. The precision of the solution of the inner problem does not need to be very high in the first outer iterations and it is automatically adjusted through the outer iterations to reach the optimality limit
${\epsilon}_{2}$ in the last one.
Setting
${\alpha}^{0}$ too restrictive (too low) causes an increase of the number of inner iterations needed in the first outer iterations and might lead to
${\mathbf{ifail}}={\mathbf{23}}$. In certain cases it might be helpful to use a more relaxed (higher)
${\alpha}^{0}$ and increase
P Update Speed which should reduce the number of inner iterations needed at the beginning of the computation in exchange for a possibly higher number of the outer iterations.
Constraint: $\epsilon <{\mathbf{Inner\; Stop\; Tolerance}}\le {10}^{3}$.
Linesearch Mode  $a$  Default $=\mathrm{AUTO}$ 
This controls the step size selection in
Algorithm 2. If
${\mathbf{Linesearch\; Mode}}=\mathrm{FULLSTEP}$ (the default for linear problems), unit steps are taken where possible and the step shortening takes place only to avoid undefined regions for the matrix penalty function
${\Phi}_{P}\left(\right)$ (see
(17)). This may be used for linear problems but it is not recommended for any nonlinear ones. If
${\mathbf{Linesearch\; Mode}}=\mathrm{ARMIJO}$, Armijo backtracking linesearch is used instead which is a fairly basic linesearch. If
${\mathbf{Linesearch\; Mode}}=\mathrm{GOLDSTEIN}$, a cubic safeguarded linesearch based on the Goldstein condition is employed; this is the recommended (and default) choice for nonlinear problems.
Constraint: ${\mathbf{Linesearch\; Mode}}=\mathrm{AUTO}$, $\mathrm{FULLSTEP}$, $\mathrm{ARMIJO}$ or $\mathrm{GOLDSTEIN}$.
List  $a$  Default $=\mathrm{NO}$ 
This argument may be set to $\mathrm{YES}$ if you wish to turn on printing of each optional parameter specification as it is supplied.
Constraint: ${\mathbf{List}}=\mathrm{YES}$ or $\mathrm{NO}$.
Monitor Frequency  $i$  Default $=0$ 
If
${\mathbf{Monitor\; Frequency}}>0$, the solver returns to you at the end of every
$i$th outer iteration. During these intermediate exits, the current point
x and Lagrangian multipliers
u,
ua (if requested) are provided as well as the statistics and error measures (
rinfo,
stats). Argument
inform helps to distinguish between intermediate and final exits and also allows immediate termination.
If ${\mathbf{Monitor\; Frequency}}=0$, the solver stops only once on the final point and no intermediate exits are made.
Constraint: ${\mathbf{Monitor\; Frequency}}\ge 0$.
Monitoring File  $i$  Default $=-1$ 
If
$i\ge 0$, the
unit number
for the secondary (monitoring) output. If set to
$-1$, no secondary output is provided. The following information is output to the unit:
 – a listing of the optional parameters;
 – problem statistics, the iteration log and the final status as set by Monitoring Level.
Constraint: ${\mathbf{Monitoring\; File}}\ge -1$.
Monitoring Level  $i$  Default $=4$ 
This argument sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with
Print Level.
Constraint: $0\le {\mathbf{Monitoring\; Level}}\le 5$.
Outer Iteration Limit  $i$  Default $\text{}=100$ 
The maximum number of the outer iterations to be performed by
Algorithm 1. If
${\mathbf{Outer\; Iteration\; Limit}}=0$, no iteration is performed, only quantities needed in the stopping criteria are computed and returned in
rinfo. This might be useful in connection with
${\mathbf{Initial\; X}}=\mathrm{USER}$ and
${\mathbf{Initial\; U}}=\mathrm{USER}$ to check optimality of the given point. However, note that the rules for possible modifications of the starting point still apply, see
u and
ua. Setting the option too low might lead to
${\mathbf{ifail}}={\mathbf{22}}$.
Constraint: ${\mathbf{Outer\; Iteration\; Limit}}\ge 0$.
P Min  $r$  Default $=\sqrt{\epsilon}$ 
This controls ${p}_{\mathrm{min}}$, the lowest possible penalty value $p$ used for (standard) inequalities. In general, very small values of the penalty optional parameters cause ill-conditioning which might lead to numerical difficulties. On the other hand, a very high ${p}_{\mathrm{min}}$ prevents the algorithm from reaching the requested accuracy on the feasibility. Under normal circumstances, the default value is recommended.
Constraint: $\epsilon \le {\mathbf{P\; Min}}\le {10}^{-2}$.
Pmat Min  $r$  Default $=\sqrt{\epsilon}$ 
This is an equivalent of
P Min for the minimal matrix penalty optional parameter
${P}_{\mathrm{min}}$. The same advice applies.
Constraint: $\epsilon \le {\mathbf{Pmat\; Min}}\le {10}^{-2}$.
Preference  $a$  Default $=\mathrm{SPEED}$ 
This option affects how contributions from the matrix constraints
(17) to the system Hessian matrix are computed. The default option of
${\mathbf{Preference}}=\mathrm{SPEED}$ should be suitable in most cases. However, dealing with matrix constraints of a very high dimension may cause noticeable memory overhead and switching to
${\mathbf{Preference}}=\mathrm{MEMORY}$ may be required.
Constraint: ${\mathbf{Preference}}=\mathrm{SPEED}$ or $\mathrm{MEMORY}$.
Presolve Block Detect  $a$  Default $=\mathrm{YES}$ 
If ${\mathbf{Presolve\; Block\; Detect}}=\mathrm{YES}$, the matrix constraints are checked during preprocessing to determine if they can be split into smaller independent ones, thus speeding up the solver.
Constraint: ${\mathbf{Presolve\; Block\; Detect}}=\mathrm{YES}$ or $\mathrm{NO}$.
Print File  $i$  Default
$=\mathrm{advisory\; message\; unit\; number}$ 
If
$i\ge 0$, the
unit number
for the primary output of the solver. If
${\mathbf{Print\; File}}=-1$, the primary output is completely turned off independently of other settings. The default value is the advisory message unit number as defined by
x04abf at the time of the optional parameters initialization, e.g., at the initialization of the handle. The following information is output to the unit:
 – a listing of optional parameters if set by Print Options;
 – problem statistics, the iteration log and the final status from the solver as set by Print Level.
Constraint: ${\mathbf{Print\; File}}\ge -1$.
Print Level  $i$  Default $=2$ 
This argument defines how detailed information should be printed by the solver to the primary output.
$i$  Output
$0$  No output from the solver
$1$  Only the final status and the objective value
$2$  Problem statistics, one line per outer iteration showing the progress of the solution, final status and statistics
$3$  As level $2$ but detailed output of the outer iterations is provided and a brief overview of the inner iterations
$4$, $5$  As level $3$ but details of the inner iterations are printed as well
Constraint: $0\le {\mathbf{Print\; Level}}\le 5$.
Print Options  $a$  Default $=\mathrm{YES}$ 
If ${\mathbf{Print\; Options}}=\mathrm{YES}$, a listing of optional parameters will be printed to the primary output.
Constraint: ${\mathbf{Print\; Options}}=\mathrm{YES}$ or $\mathrm{NO}$.
P Update Speed  $i$  Default $=12$ 
This option affects the rate at which the penalty optional parameters
$p,P$ are updated (
Algorithm 1, step
(iii)) and thus indirectly influences the overall number of outer iterations. Its value can be interpreted as the typical number of outer iterations needed to get from the initial penalty values
${p}^{0}$,
${P}^{0}$ halfway to the
${p}_{\mathrm{min}}$ and
${P}_{\mathrm{min}}$. Values smaller than
$3$ cause a very aggressive penalty update strategy which might lead to an increased number of inner iterations and possibly to numerical difficulties. On the other hand, values higher than
$15$ produce a relatively conservative approach which leads to a higher number of outer iterations.
If the solver encounters difficulties on your problem, a higher value might help. If your problem is working fine, setting a lower value might increase the speed.
Constraint: $1\le {\mathbf{P\; Update\; Speed}}\le 100$.
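One way to read this option is as a half-life of the gap between the penalty and its lower bound; the following sketch models such a geometric update (an illustrative interpretation, not the solver's exact formula):

```python
def penalty_schedule(p0, p_min, speed, n_outer):
    # Geometric decrease chosen so that after `speed` outer iterations the
    # gap between p and p_min has roughly halved (illustrative model only).
    gamma = 0.5 ** (1.0 / speed)
    ps, p = [], p0
    for _ in range(n_outer):
        ps.append(p)
        p = max(p_min, p_min + gamma * (p - p_min))
    return ps
```

Smaller values of `speed` shrink the penalty faster (the aggressive regime described above), larger values slow it down.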
Stats Time  $a$  Default $=\mathrm{NO}$ 
This argument turns on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful for a choice of different solving approaches. It is possible to choose between CPU and wall clock time. Choice $\mathrm{YES}$ is equivalent to wall clock.
Constraint: ${\mathbf{Stats\; Time}}=\mathrm{YES}$, $\mathrm{NO}$, $\mathrm{CPU}$ or $\mathrm{WALL\; CLOCK}$.
Stop Criteria  $a$  Default $=\mathrm{SOFT}$ 
If ${\mathbf{Stop\; Criteria}}=\mathrm{SOFT}$, the solver is allowed to stop prematurely with a suboptimal solution, ${\mathbf{ifail}}={\mathbf{50}}$, if it predicts that a better estimate of the solution cannot be reached. This is the recommended option.
Constraint: ${\mathbf{Stop\; Criteria}}=\mathrm{SOFT}$ or $\mathrm{STRICT}$.
Stop Tolerance 1  $r$  Default $=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({10}^{-6},\sqrt{\epsilon}\right)$ 
This option defines
${\epsilon}_{1}$ used as a tolerance for the relative duality gap
(10) and the relative precision
(11), see
Section 11.2.
Constraint: ${\mathbf{Stop\; Tolerance\; 1}}>\epsilon $.
Stop Tolerance 2  $r$  Default $=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({10}^{-7},\sqrt{\epsilon}\right)$ 
This option sets the value
${\epsilon}_{2}$ which is used for optimality
(12) and complementarity
(14) tests from KKT conditions or if
${\mathbf{DIMACS\; Measures}}=\mathrm{CHECK}$ for all DIMACS error measures instead. See
Section 11.2.
Constraint: ${\mathbf{Stop\; Tolerance\; 2}}>\epsilon $.
Stop Tolerance Feasibility  $r$  Default $=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({10}^{-7},\sqrt{\epsilon}\right)$ 
This argument places an acceptance limit on the feasibility of the solution
(13),
${\epsilon}_{\mathrm{feas}}$. See
Section 11.2.
Constraint: ${\mathbf{Stop\; Tolerance\; Feasibility}}>\epsilon $.
Task  $a$  Default $=\mathrm{MINIMIZE}$ 
This argument specifies the required direction of the optimization. If
${\mathbf{Task}}=\mathrm{FEASIBLEPOINT}$, the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function was set,
Task reverts to
$\mathrm{FEASIBLEPOINT}$ automatically.
Constraint: ${\mathbf{Task}}=\mathrm{MINIMIZE}$, $\mathrm{MAXIMIZE}$ or $\mathrm{FEASIBLE\; POINT}$.
Transform Constraints  $a$  Default $=\mathrm{AUTO}$ 
This argument controls how equality constraints are treated by the solver. If
${\mathbf{Transform\; Constraints}}=\mathrm{EQUALITIES}$, all equality constraints
${h}_{k}\left(x\right)=0$ from
(4) are treated as two inequalities
${h}_{k}\left(x\right)\le 0$ and
${h}_{k}\left(x\right)\ge 0$, see
Section 11.4. This is the default and the only option in this release for equality constrained problems.
Constraint: ${\mathbf{Transform\; Constraints}}=\mathrm{AUTO}$, $\mathrm{NO}$ or $\mathrm{EQUALITIES}$.
U Update Restriction  $r$  Default $=0.5$ 
This defines the value ${\mu}_{g}$ giving the bounds on the updates of Lagrangian multipliers for (standard) inequalities between the outer iterations. Values close to $1$ limit the changes of the multipliers and serve as a kind of smoothing, lower values allow more significant changes.
Based on numerical experience, large variations in the multipliers may lead to a large number of iterations in the subsequent step and might disturb the convergence due to ill-conditioning.
It might be worth experimenting with the value on your particular problem. Mid range values are recommended over the more extremal ones.
Constraint: $\epsilon <{\mathbf{U\; Update\; Restriction}}<1$.
Umat Update Restriction  $r$  Default $=0.3$ 
This is an equivalent of
U Update Restriction for matrix constraints, denoted as
${\mu}_{A}$ in
Section 11.1. The advice above applies equally.
Constraint: $\epsilon <{\mathbf{Umat\; Update\; Restriction}}<1$.