# NAG FL Interface: E04 (Opt) Minimizing or Maximizing a Function

## 1 Scope of the Chapter

This chapter provides routines for solving various mathematical optimization problems by solvers based on local stopping criteria. The main classes of problems covered in this chapter are:
• Linear Programming (LP) – dense and sparse;
• Quadratic Programming (QP) – convex and nonconvex, dense and sparse;
• Nonlinear Programming (NLP) – dense and sparse, based on active-set SQP methods or interior point methods (IPM);
• Second-order Cone Programming (SOCP);
• Semidefinite Programming (SDP) – both linear matrix inequalities (LMI) and bilinear matrix inequalities (BMI);
• Derivative-free Optimization (DFO);
• Least Squares (LSQ), data fitting – linear and nonlinear, constrained and unconstrained.
For a full overview of the functionality offered in this chapter, see Section 5 or the Chapter Contents (Chapter E04). Related functionality is provided elsewhere in the Library:
• Chapter E05 contains routines to solve global optimization problems;
• Chapter H addresses problems arising in operational research and focuses on Mixed Integer Programming (MIP) ;
• Chapters F07 and F08 include routines for linear algebra and in particular unconstrained linear least squares;
• Chapter E02 focuses on curve and surface fitting, in which linear data fitting in ${l}_{1}$ or ${l}_{\infty }$ norm might be of interest;
• Chapter G02 offers several regression (data fitting) routines, including linear, nonlinear and quantile regression, LARS, LASSO and others.
This introduction is only a brief guide to the subject of optimization. It discusses a classification of the optimization problems and presents an overview of the algorithms and their stopping criteria to help with the choice of a correct solver for a particular problem. Anyone with a difficult or protracted problem to solve will find it beneficial to consult a more detailed text, see Gill et al. (1981), Fletcher (1987) or Nocedal and Wright (2006). If you are unfamiliar with the mathematics of the subject you may find Sections 2.1, 2.2, 2.3, 2.6 and 3 a useful starting point.

## 2 Background to the Problems

### 2.1 Introduction to Mathematical Optimization

Mathematical Optimization, also known as Mathematical Programming, refers to the problem of finding values of the inputs from a given set so that a function (called the objective function) is minimized or maximized. The inputs are called decision variables, primal variables or just variables. The given set from which the decision variables are selected is referred to as a feasible set and might be defined as a domain where constraints expressed as functions of the decision variables hold certain values. Each point of the feasible set is called a feasible point.
A general mathematical formulation of such a problem might be written as
 $\underset{x}{\mathrm{minimize}}\;f(x)\quad\text{subject to}\quad x\in\mathcal{F},$
where $x$ denotes the decision variables, $f\left(x\right)$ the objective function and $\mathcal{F}$ the feasibility set. In this chapter we assume that $\mathcal{F}\subset {ℝ}^{n}$. Since maximization of the objective function $f\left(x\right)$ is equivalent to minimizing $-f\left(x\right)$, only minimization is considered further in the text. Some routines allow you to specify whether you are solving a minimization or maximization problem, carrying out the required transformation of the objective function in the latter case.
A point ${x}^{*}$ is said to be a local minimum of a function $f$ if it is feasible (${x}^{*}\in \mathcal{F}$) and if $f\left(x\right)\ge f\left({x}^{*}\right)$ for all $x\in \mathcal{F}$ near ${x}^{*}$. A point ${x}^{*}$ is a global minimum if it is a local minimum and $f\left(x\right)\ge f\left({x}^{*}\right)$ for all feasible $x$. The solvers in this chapter are based on algorithms which seek only a local minimum, however, many problems (such as convex optimization problems) have only one local minimum. This is also the global minimum. In such cases the Chapter E04 solvers find the global minimum. See Chapter E05 for solvers which try to find a global solution even for nonconvex functions.

### 2.2 Classification of Optimization Problems

There is no single efficient solver for all optimization problems. Therefore, it is important to choose a solver which matches the problem and any specific needs as closely as possible. A more generic solver can be applied; however, its performance may suffer, depending on the underlying algorithm.
There are various criteria to help to classify optimization problems into particular categories. The main criteria are as follows:
• Type of objective function;
• Type of constraints;
• Size of the problem;
• Smoothness of the data and available derivative information.
Each of the criteria is discussed below to give the necessary information to identify the class of the optimization problem. Section 2.5 presents the basic properties of the algorithms and Section 3 advises on the choice of particular routines in the chapter.

#### 2.2.1 Types of objective functions

In general, if there is structure in a problem, the solver should benefit from it. For example, a solver for problems with a sum-of-squares objective should perform better than one that treats the same objective as a general nonlinear function. Therefore, it is important to recognize typical types of objective function.
An optimization problem which has no objective is equivalent to having a constant objective, i.e., $f\left(x\right)=0$. It is usually called a feasible point problem. The task is then to find any point which satisfies the constraints.
A linear objective function is a function which is linear in all variables and, therefore, can be represented as
 $f(x)=c^{\mathrm{T}}x+{c}_{0}$
where $c\in {ℝ}^{n}$. Scalar ${c}_{0}$ has no influence on the choice of decision variables $x$ and is usually omitted. It will not be used further in this text.
A quadratic objective function is an extension of a linear function with quadratic terms as follows:
 $f(x)=\tfrac{1}{2}x^{\mathrm{T}}Hx+c^{\mathrm{T}}x.$
Here $H$ is a real symmetric $n×n$ matrix. In addition, if $H$ is positive semidefinite (all its eigenvalues are non-negative), the objective is convex. In the convex case the quadratic term might also be defined in a factorized form as follows:
 $f(x)=\tfrac{1}{2}x^{\mathrm{T}}F^{\mathrm{T}}Fx+c^{\mathrm{T}}x$
where $F$ can be viewed as a factor of $H={F}^{\mathrm{T}}F$. For instance, the objective function in a linear least squares problem ${‖Fx-y‖}_{2}^{2}$ falls into this class: its quadratic term is ${x}^{\mathrm{T}}{F}^{\mathrm{T}}Fx$ and ${c}^{\mathrm{T}}=-2{y}^{\mathrm{T}}F$.
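As a quick numerical check of this correspondence, the sketch below (a NumPy illustration, not part of the Library; the data are random placeholders) expands ${‖Fx-y‖}_{2}^{2}$ and confirms that it agrees with the convex quadratic ${x}^{\mathrm{T}}{F}^{\mathrm{T}}Fx-2{y}^{\mathrm{T}}Fx+{y}^{\mathrm{T}}y$.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((5, 3))   # factor of H = F^T F
y = rng.standard_normal(5)
x = rng.standard_normal(3)

# Direct evaluation of the residual norm squared.
lhs = np.sum((F @ x - y) ** 2)

# Expansion: x^T (F^T F) x - 2 y^T F x + y^T y, a convex quadratic in x.
H = F.T @ F                       # positive semidefinite by construction
rhs = x @ H @ x - 2 * (y @ F) @ x + y @ y

print(abs(lhs - rhs))             # zero up to rounding error
```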
A general nonlinear objective function is any $f:{ℝ}^{n}\to ℝ$ without a special structure.
Special consideration is given to the objective function in the form of a sum of squares of functions, such as
 $f(x)=\sum_{i=1}^{m}r_{i}^{2}(x)$
where ${r}_{i}:{ℝ}^{n}\to ℝ$ are often called residual functions. This form of the objective plays a key role in data fitting solved as a least squares problem, as shown in Section 2.2.3.

#### 2.2.2 Types of constraints

Not all optimization problems have to have constraints. If there are no restrictions on the choice of $x$ except that $x\in \mathcal{F}={ℝ}^{n}$, the problem is called unconstrained and thus every point is a feasible point.
Simple bounds on decision variables $x\in {ℝ}^{n}$ (also known as box constraints or bound constraints) restrict the value of the variables, e.g., ${x}_{5}\le 10$. They might be written in a general form as
 $l_{x_{i}}\le x_{i}\le u_{x_{i}},\quad i=1,\dots,n$
or in the vector notation as
 $l_{x}\le x\le u_{x}$
where ${l}_{x}$ and ${u}_{x}$ are $n$-dimensional vectors. Note that lower and upper bounds are specified for all the variables. Conceptually allowing ${l}_{{x}_{i}}=-\infty$, ${u}_{{x}_{i}}=+\infty$ or ${l}_{{x}_{i}}={u}_{{x}_{i}}$ gives full generality in the types of constraints: unconstrained variables, one-sided inequalities, ranges or equalities (fixing the variable).
The same format of bounds is adopted for linear and nonlinear constraints throughout the chapter. Note that, for the purpose of passing infinite bounds to the routines, all values above a certain threshold (typically ${10}^{20}$) are treated as $+\infty$.
Linear constraints are defined as constraint functions that are linear in all of their variables, e.g., $3{x}_{1}+2{x}_{2}\ge 4$. They can be stated in a matrix form as
 $l_{B}\le Bx\le u_{B}$
where $B$ is a general ${m}_{B}×n$ rectangular matrix and ${l}_{B}$ and ${u}_{B}$ are ${m}_{B}$-dimensional vectors. Each row of $B$ represents linear coefficients of one linear constraint. The same rules for bounds apply as in the simple bounds case.
Although the bounds on ${x}_{i}$ could be included in the definition of linear constraints, we recommend you distinguish between them for reasons of computational efficiency as most of the solvers treat simple bounds explicitly.
Quadratic constraints are defined as quadratic functions of a set of variables in a standard form as
 $\tfrac{1}{2}x^{\mathrm{T}}Qx+r^{\mathrm{T}}x+s\le 0$
where $Q$ is a symmetric $n×n$ matrix, $r$ is an $n$-dimensional vector and $s$ is a scalar. If $Q$ is positive semidefinite, the constraint is convex. In the convex case a quadratic constraint may also be defined in its factorized form, similarly to the quadratic objective function, as
 $\tfrac{1}{2}x^{\mathrm{T}}F^{\mathrm{T}}Fx+r^{\mathrm{T}}x+s\le 0$
where $F$ is a rectangular matrix which can be viewed as a factor of $Q={F}^{\mathrm{T}}F$.
A set of ${m}_{g}$ nonlinear constraints may be defined in terms of a nonlinear function $g:{ℝ}^{n}\to {ℝ}^{{m}_{g}}$ and the bounds ${l}_{g}$ and ${u}_{g}$ which follow the same format as simple bounds and linear constraints:
 $l_{g}\le g(x)\le u_{g}.$
Although the linear constraints could be included in the definition of nonlinear constraints, again we prefer to distinguish between them for reasons of computational efficiency.
There are two commonly used second-order cones (also known as quadratic, Lorentz or ice cream cones): a quadratic cone and a rotated quadratic cone. They are defined by the following inequalities:
 $\mathcal{K}_{q}^{m_{i}}≔\left\{z=(z_{1},z_{2},\dots,z_{m_{i}})\in{ℝ}^{m_{i}} : z_{1}^{2}\ge\sum_{j=2}^{m_{i}}z_{j}^{2},\; z_{1}\ge 0\right\}.$ (1)
 $\mathcal{K}_{r}^{m_{i}}≔\left\{z=(z_{1},z_{2},\dots,z_{m_{i}})\in{ℝ}^{m_{i}} : 2z_{1}z_{2}\ge\sum_{j=3}^{m_{i}}z_{j}^{2},\; z_{1}\ge 0,\; z_{2}\ge 0\right\}.$ (2)
Here $z$ denotes a subset of the decision variables $x$. Such cones do not necessarily appear naturally in the model formulations so a reformulation is often needed. For example, all convex quadratic constraints or many types of norm minimization problems can be written as quadratic cones, see Section 9.1 in e04ptf.
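The cone definitions (1) and (2) translate directly into membership tests. The following sketch (plain NumPy, illustrative only; the function names are ours, not Library routines) checks whether a vector lies in a quadratic or rotated quadratic cone.

```python
import numpy as np

def in_quadratic_cone(z, tol=0.0):
    """Membership test for the quadratic cone (1): z1^2 >= sum_{j>=2} z_j^2, z1 >= 0."""
    z = np.asarray(z, dtype=float)
    return z[0] >= -tol and z[0] ** 2 + tol >= np.sum(z[1:] ** 2)

def in_rotated_cone(z, tol=0.0):
    """Membership test for the rotated cone (2): 2 z1 z2 >= sum_{j>=3} z_j^2, z1, z2 >= 0."""
    z = np.asarray(z, dtype=float)
    return z[0] >= -tol and z[1] >= -tol and 2.0 * z[0] * z[1] + tol >= np.sum(z[2:] ** 2)

print(in_quadratic_cone([5.0, 3.0, 4.0]))   # 25 >= 9 + 16 -> True (boundary point)
print(in_rotated_cone([1.0, 2.0, 2.0]))     # 4 >= 4 -> True
print(in_quadratic_cone([1.0, 1.0, 1.0]))   # 1 >= 2 -> False
```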
A matrix constraint (or matrix inequality) is a constraint on eigenvalues of a matrix operator. More precisely, let ${𝕊}^{m}$ denote the space of real symmetric matrices $m×m$ and let $\mathcal{A}$ be a matrix operator $\mathcal{A}:{ℝ}^{n}\to {𝕊}^{m}$, i.e., it assigns a symmetric matrix $\mathcal{A}\left(x\right)$ for each $x$. The matrix constraint can be expressed as
 $\mathcal{A}(x)⪰0$
where the inequality $S⪰0$ for $S\in {𝕊}^{m}$ is meant in the eigenvalue sense, namely all eigenvalues of the matrix $S$ should be non-negative (the matrix should be positive semidefinite).
There are two types of matrix constraints allowed in the current mark of the Library. The first is linear matrix inequality (LMI) formulated as
 $\mathcal{A}(x)=\sum_{i=1}^{n}x_{i}A_{i}-A_{0}⪰0$
and the second one, bilinear matrix inequality (BMI), stated as
 $\mathcal{A}(x)=\sum_{i,j=1}^{n}x_{i}x_{j}Q_{ij}+\sum_{i=1}^{n}x_{i}A_{i}-A_{0}⪰0.$
Here all matrices ${A}_{i}$, ${Q}_{ij}$ are given real symmetric matrices of the same dimension. Note that the latter type is in fact quadratic in $x$, nevertheless, it is referred to as bilinear for historical reasons.
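Checking an LMI at a given point reduces to an eigenvalue computation, as the following sketch shows (a NumPy illustration with toy $2×2$ data; `lmi_feasible` is our own helper, not a Library routine).

```python
import numpy as np

def lmi_feasible(x, A, A0, tol=1e-10):
    """Check the LMI  sum_i x_i A_i - A0 >= 0  (positive semidefinite) at a point x."""
    S = sum(xi * Ai for xi, Ai in zip(x, A)) - A0
    return np.linalg.eigvalsh(S).min() >= -tol

# Toy data: two 2x2 symmetric matrices A_1, A_2 and A_0 = I.
A = [np.array([[1.0, 0.0], [0.0, 0.0]]), np.array([[0.0, 0.0], [0.0, 1.0]])]
A0 = np.array([[1.0, 0.0], [0.0, 1.0]])

print(lmi_feasible([2.0, 2.0], A, A0))  # S = diag(1, 1)  -> True
print(lmi_feasible([0.0, 2.0], A, A0))  # S = diag(-1, 1) -> False
```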

#### 2.2.3 Typical classes of optimization problems

Specific combinations of the types of the objective functions and constraints give rise to various classes of optimization problems. The common ones are presented below. It is always advisable to consider the closest formulation which covers your problem when choosing the solver. For more information see classical texts such as Dantzig (1963), Gill et al. (1981), Fletcher (1987), Nocedal and Wright (2006) or Chvátal (1983).
A Linear Programming (LP) problem is a problem with a linear objective function, linear constraints and simple bounds. It can be written as follows:
 $\underset{x\in{ℝ}^{n}}{\mathrm{minimize}}\;c^{\mathrm{T}}x\quad\text{subject to}\quad l_{B}\le Bx\le u_{B},\; l_{x}\le x\le u_{x}.$
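For illustration only (this uses SciPy's `linprog`, not a routine of this chapter), the sketch below solves a tiny LP in the form above; a two-sided constraint $l_B\le Bx\le u_B$ is passed as the pair $Bx\le u_B$, $-Bx\le -l_B$.

```python
import numpy as np
from scipy.optimize import linprog

# minimize c^T x subject to l_B <= Bx <= u_B and l_x <= x <= u_x.
c = np.array([-1.0, -2.0])
B = np.array([[1.0, 1.0]])
l_B, u_B = np.array([0.0]), np.array([4.0])

res = linprog(c,
              A_ub=np.vstack([B, -B]),                # Bx <= u_B and -Bx <= -l_B
              b_ub=np.concatenate([u_B, -l_B]),
              bounds=[(0, 3), (0, 3)])                # simple bounds l_x <= x <= u_x
print(res.x, res.fun)   # optimum at x = (1, 3) with objective -7
```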
Quadratic Programming (QP) problems optimize a quadratic objective function over a set given by linear constraints and simple bounds. Depending on the convexity of the objective function, we can distinguish between convex and nonconvex (or general) QP.
 $\underset{x\in{ℝ}^{n}}{\mathrm{minimize}}\;\tfrac{1}{2}x^{\mathrm{T}}Hx+c^{\mathrm{T}}x\quad\text{subject to}\quad l_{B}\le Bx\le u_{B},\; l_{x}\le x\le u_{x}.$
Quadratically Constrained Quadratic Programming (QCQP) problems extend quadratic programming problems with a set of quadratic constraints. Depending on the convexity of the objective function and quadratic constraints, we can distinguish between convex and nonconvex (or general) QCQP.
 $\underset{x\in{ℝ}^{n}}{\mathrm{minimize}}\;\tfrac{1}{2}x^{\mathrm{T}}Hx+c^{\mathrm{T}}x\quad\text{subject to}\quad \tfrac{1}{2}x^{\mathrm{T}}Q_{k}x+r_{k}^{\mathrm{T}}x+s_{k}\le 0,\; k=1,\dots,m_{Q},\; l_{B}\le Bx\le u_{B},\; l_{x}\le x\le u_{x}.$
Nonlinear Programming (NLP) problems allow a general nonlinear objective function $f\left(x\right)$ and any of the nonlinear, quadratic, linear or bound constraints. Special cases, in which some (or all) of the constraints are missing, are termed unconstrained, bound-constrained or linearly-constrained nonlinear programming, and might have a specific solver, as some algorithms make special provision for each constraint type. Problems with a linear or quadratic objective and nonlinear constraints should still be solved as general NLPs.
 $\underset{x\in{ℝ}^{n}}{\mathrm{minimize}}\;f(x)\quad\text{subject to}\quad l_{g}\le g(x)\le u_{g},\; l_{B}\le Bx\le u_{B},\; l_{x}\le x\le u_{x}.$
Second-order Cone Programming (SOCP) problems are composed of a linear objective function, linear constraints, simple bounds and one or more quadratic cones. The SOCP problem may be written as
 $\underset{x\in{ℝ}^{n}}{\mathrm{minimize}}\;c^{\mathrm{T}}x\quad\text{subject to}\quad l_{A}\le Ax\le u_{A},\; l_{x}\le x\le u_{x},\; x\in\mathcal{K},$
where $\mathcal{K}={\mathcal{K}}^{{n}_{1}}×\cdots ×{\mathcal{K}}^{{n}_{r}}×{ℝ}^{{n}_{l}}$ is a Cartesian product of $r$ quadratic or rotated quadratic cones (as defined in Section 2.2.2) and ${n}_{l}$-dimensional real space. Note that the cones in a formulation may overlap (i.e., one decision variable may be involved in more than one quadratic cone). SOCP is a very powerful model for many convex problems, however, typically it is necessary to reformulate the model to obtain the form above. Convex QCQP problems are reformulated automatically by the solver, for others see Section 9.1 in e04ptf, Alizadeh and Goldfarb (2003) and Lobo et al. (1998).
Semidefinite Programming (SDP) typically refers to linear semidefinite programming thus a problem with a linear objective function, linear constraints and linear matrix inequalities:
 $\underset{x\in{ℝ}^{n}}{\mathrm{minimize}}\;c^{\mathrm{T}}x\quad\text{subject to}\quad \sum_{i=1}^{n}x_{i}A_{i}^{k}-A_{0}^{k}⪰0,\; k=1,\dots,m_{A},\; l_{B}\le Bx\le u_{B},\; l_{x}\le x\le u_{x}.$
This problem can be extended with a quadratic objective and bilinear (in fact quadratic) matrix inequalities. We refer to it as a semidefinite programming problem with bilinear matrix inequalities (BMI-SDP):
 $\underset{x\in{ℝ}^{n}}{\mathrm{minimize}}\;\tfrac{1}{2}x^{\mathrm{T}}Hx+c^{\mathrm{T}}x\quad\text{subject to}\quad \sum_{i,j=1}^{n}x_{i}x_{j}Q_{ij}^{k}+\sum_{i=1}^{n}x_{i}A_{i}^{k}-A_{0}^{k}⪰0,\; k=1,\dots,m_{A},\; l_{B}\le Bx\le u_{B},\; l_{x}\le x\le u_{x}.$
A Least Squares (LSQ) problem is a problem in which the objective function, a sum of squares, is minimized subject to the usual constraints. If the residual functions ${r}_{i}\left(x\right)$ are linear or nonlinear, the problem is known as linear or nonlinear least squares, respectively. Not all types of constraints need to be present, which gives rise to the special cases of unconstrained, bound-constrained or linearly-constrained least squares problems, as in NLP.
 $\underset{x\in{ℝ}^{n}}{\mathrm{minimize}}\;\sum_{i=1}^{m}r_{i}^{2}(x)\quad\text{subject to}\quad l_{g}\le g(x)\le u_{g},\; l_{B}\le Bx\le u_{B},\; l_{x}\le x\le u_{x}.$
This form of the problem is very common in data fitting, as demonstrated by the following example. Let us consider a process that is observed at times ${t}_{i}$ and measured with results ${y}_{i}$, for $i=1,2,\dots ,m$. Furthermore, the process is assumed to behave according to a model $\varphi \left(t;x\right)$ where $x$ are the parameters of the model. Given that the measurements might be inaccurate and the process might not exactly follow the model, it is beneficial to find model parameters $x$ so that the error of the fit of the model to the measurements is minimized. This can be formulated as an optimization problem in which $x$ are decision variables and the objective function is the sum of squared errors of the fit at each individual measurement, thus:
 $\underset{x\in{ℝ}^{n}}{\mathrm{minimize}}\;\sum_{i=1}^{m}r_{i}^{2}(x)\quad\text{where}\quad r_{i}(x)=\varphi(t_{i};x)-y_{i}.$
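A minimal sketch of such a fit, using SciPy's general-purpose `least_squares` rather than a Library routine, with a hypothetical exponential model $\varphi(t;x)={x}_{1}{e}^{{x}_{2}t}$ and noiseless synthetic data:

```python
import numpy as np
from scipy.optimize import least_squares

# Model phi(t; x) = x[0] * exp(x[1] * t); residuals r_i(x) = phi(t_i; x) - y_i.
t = np.linspace(0.0, 1.0, 20)
x_true = np.array([2.0, -1.5])
y = x_true[0] * np.exp(x_true[1] * t)          # noiseless observations

def residuals(x):
    return x[0] * np.exp(x[1] * t) - y

sol = least_squares(residuals, x0=np.array([1.0, 0.0]))
print(sol.x)   # close to the generating parameters (2.0, -1.5)
```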

#### 2.2.4 Problem size, dense and sparse problems

The size of the optimization problem plays an important role in the choice of the solver. The size is usually understood to be the number of variables $n$ and the number (and the type) of the constraints. Depending on the size of the problem we talk about small-scale, medium-scale or large-scale problems.
It is often more practical to look at the data and its structure rather than just the size of the problem. Typically, in a large-scale problem not every variable interacts with every other. It is natural that only a small portion of the constraints (if any) involves all variables and that the majority of the constraints depends only on small, distinct subsets of the variables. This creates many explicit zeros in the data representation, which is beneficial to capture and pass to the solver. In such a case the problem is referred to as sparse. The data representation usually has the form of a sparse matrix which defines the linear constraint matrix $B$, the Jacobian matrix of the nonlinear constraints ${g}_{i}$ or the Hessian of the objective $H$. Common sparse matrix formats are used, such as coordinate storage (CS) and compressed column storage (CCS) (see Section 2.1 in the F11 Chapter Introduction).
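For example, coordinate storage keeps only the nonzero entries together with their row and column indices; the sketch below (a SciPy illustration, not a Library interface) builds a small matrix in CS form and converts it to CCS.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Coordinate storage (CS): only nonzeros with their (row, col) indices are kept.
rows = np.array([0, 0, 1, 2])
cols = np.array([0, 2, 1, 2])
vals = np.array([4.0, 1.0, 3.0, 5.0])
B = coo_matrix((vals, (rows, cols)), shape=(3, 3))

# Conversion to compressed column storage (CCS/CSC), used by many sparse solvers.
B_csc = B.tocsc()
print(B_csc.toarray())
print(B_csc.nnz)   # 4 stored nonzeros out of 9 entries
```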
The counterpart to a sparse problem is a dense problem in which the matrices are stored in general full format and no structure is assumed or exploited. Whereas passing a dense problem to a sparse solver typically presents only a small overhead, calling a dense solver on a large-scale sparse problem is ill-advised; it leads to significant performance degradation and memory overuse.

#### 2.2.5 Derivative information, smoothness, noise and Derivative-free Optimization (DFO)

Most of the classical optimization algorithms rely heavily on derivative information. It plays a key role in necessary and sufficient conditions (see Section 2.4) and in the computation of the search direction at each iteration (see Section 2.5). Therefore, it is important that accurate derivatives of the nonlinear objective and nonlinear constraints are provided whenever possible.
Unless stated otherwise, it is assumed that the nonlinear functions are sufficiently smooth. The solvers will usually solve optimization problems even if there are isolated discontinuities away from the solution; however, you should always consider whether an alternative smooth representation of the problem exists. A typical example is the absolute value $|{x}_{i}|$, which does not have a first derivative at ${x}_{i}=0$. Nevertheless, if the model allows it, it can be transformed as
 $x_{i}=x_{i}^{+}-x_{i}^{-},\quad |x_{i}|=x_{i}^{+}+x_{i}^{-},\quad\text{where } x_{i}^{+},\,x_{i}^{-}\ge 0$
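A minimal sketch of the split (plain NumPy; `split` is our own helper, not a Library routine): for any $x$, the choice ${x}^{+}=\max(x,0)$, ${x}^{-}=\max(-x,0)$ satisfies both identities.

```python
import numpy as np

def split(x):
    """Split x into nonnegative parts: x = x_plus - x_minus, |x| = x_plus + x_minus."""
    x = np.asarray(x, dtype=float)
    return np.maximum(x, 0.0), np.maximum(-x, 0.0)

x = np.array([-2.0, 0.0, 3.5])
xp, xm = split(x)
print(xp - xm)   # recovers x
print(xp + xm)   # recovers |x|
```

In an optimization model, ${x}^{+}$ and ${x}^{-}$ are introduced as nonnegative decision variables; the second identity is valid whenever at most one of the pair is nonzero, which minimization naturally encourages.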
which avoids the discontinuity of the first derivative. If many discontinuities are present, alternative methods need to be applied such as e04cbf or stochastic algorithms in Chapter E05, e05saf or e05sbf.
The vector of first partial derivatives of a function is called the gradient vector, i.e.,
 $\nabla f(x)={\left[\frac{\partial f(x)}{\partial x_{1}},\frac{\partial f(x)}{\partial x_{2}},\dots,\frac{\partial f(x)}{\partial x_{n}}\right]}^{\mathrm{T}},$
the matrix of second partial derivatives is termed the Hessian matrix, i.e.,
 $\nabla^{2}f(x)={\left[\frac{\partial^{2}f(x)}{\partial x_{i}\partial x_{j}}\right]}_{i,j=1,\dots,n}$
and the matrix of first partial derivatives of the vector-valued function $g:{ℝ}^{n}\to {ℝ}^{m}$ is known as the Jacobian matrix:
 $J(x)={\left[\frac{\partial g_{i}(x)}{\partial x_{j}}\right]}_{i=1,\dots,m,\; j=1,\dots,n}.$
If the function is smooth and the derivative is unavailable, it is possible to approximate it by finite differences, i.e., from changes in function values in response to small perturbations of the variables. Many routines in the Library estimate missing elements of the gradients automatically this way. The size of the perturbation strongly affects the quality of the approximation: a perturbation that is too small spoils the approximation through cancellation errors in floating-point arithmetic, while one that is too large reduces the agreement between the finite differences and the derivative (see e04xaf/​e04xaa for an optimal balance of the two factors). In addition, finite differences are very sensitive to the accuracy of $f\left(x\right)$. They might be unreliable or fail completely if the function evaluation is inaccurate or noisy, such as when $f\left(x\right)$ is the result of a stochastic simulation or an approximate solution of a PDE.
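The effect of the perturbation size can be seen in a few lines (a NumPy sketch with a hypothetical smooth function): a step near the square root of machine precision outperforms both a much larger and a much smaller step.

```python
import numpy as np

def fd_gradient(f, x, h):
    """Forward-difference approximation to the gradient of f at x with step h."""
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

f = lambda x: x[0] ** 2 + np.sin(x[1])
x = np.array([1.0, 0.5])
exact = np.array([2.0 * x[0], np.cos(x[1])])

# A step near sqrt(machine epsilon) balances truncation and cancellation error;
# steps that are much larger or much smaller both degrade the approximation.
for h in (1e-2, np.sqrt(np.finfo(float).eps), 1e-13):
    print(h, np.abs(fd_gradient(f, x, h) - exact).max())
```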
Derivative-free Optimization (DFO) represents an alternative to derivative-based optimization algorithms. DFO solvers neither rely on derivative information nor approximate it by finite differences. They sample function evaluations across the domain to determine a new iteration point (for example, via a quadratic model through the sampled points). They are, therefore, less sensitive to noise in the function values, because the sample points are never so close together that the noise dominates. DFO might be useful even when finite differences can be computed, as the number of function evaluations is lower; this is particularly beneficial for problems where evaluations of $f$ are expensive. DFO solvers tend to make faster initial progress towards the solution; however, they typically cannot achieve highly accurate solutions.

#### 2.2.6 Minimization subject to bounds on the objective function

In all of the above problem categories it is assumed that
 $a≤f(x)≤b$
where $a=-\infty$ and $b=+\infty$. Problems in which $a$ and/or $b$ are finite can be solved by adding an extra constraint of the appropriate type (i.e., linear or nonlinear) depending on the form of $f\left(x\right)$. Further advice is given in Section 3.7.

#### 2.2.7 Multi-objective optimization

Sometimes a problem may have two or more objective functions which are to be optimized at the same time. Such problems are called multi-objective, multi-criteria or multi-attribute optimization problems. If the constraints and all the objectives are linear, the terminology goal programming is also used.
Although no routine in this mark of the Library deals with this type of problem explicitly, techniques used in this chapter and in Chapter E05 may be employed to address such problems; see Section 2.5.5.

### 2.3 Geometric Representation

To illustrate the nature of optimization problems it is useful to consider the following example:
 $f(x)=e^{x_{1}}(4x_{1}^{2}+2x_{2}^{2}+4x_{1}x_{2}+2x_{2}+1).$
(This function is used as the example function in the documentation for the unconstrained routines.)
Figure 1 is a contour diagram of $f\left(x\right)$. The contours labelled ${F}_{0},{F}_{1},\dots ,{F}_{4}$ are isovalue contours, or lines along which the function $f\left(x\right)$ takes specific constant values. The point ${x}^{*}={\left(\frac{1}{2},-1\right)}^{\mathrm{T}}$ is a local unconstrained minimum, that is, the value of $f\left({x}^{*}\right)$ ($\text{}=0$) is less than at all the neighbouring points. A function may have several such minima. The point ${x}_{s}$ is said to be a saddle point because it is a minimum along the line AB, but a maximum along CD.
If we add the constraint ${x}_{1}\ge 0$ (a simple bound) to the problem of minimizing $f\left(x\right)$, the solution remains unaltered. In Figure 1 this constraint is represented by the straight line passing through ${x}_{1}=0$, and the shading on the line indicates the unacceptable region (i.e., ${x}_{1}<0$).
If we add the nonlinear constraint ${g}_{1}\left(x\right):{x}_{1}+{x}_{2}-{x}_{1}{x}_{2}-\frac{3}{2}\ge 0$, represented by the curved shaded line in Figure 1, then ${x}^{*}$ is not a feasible point because ${g}_{1}\left({x}^{*}\right)<0$. The solution of the new constrained problem is ${x}_{b}\simeq {\left(1.1825,-1.7397\right)}^{\mathrm{T}}$, the feasible point with the smallest function value (where $f\left({x}_{b}\right)\simeq 3.0607$).
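The quoted values are easy to verify numerically; the sketch below (plain Python/NumPy) evaluates the example function and the nonlinear constraint at the points above.

```python
import numpy as np

def f(x):
    """Example function f(x) = exp(x1) * (4 x1^2 + 2 x2^2 + 4 x1 x2 + 2 x2 + 1)."""
    return np.exp(x[0]) * (4*x[0]**2 + 2*x[1]**2 + 4*x[0]*x[1] + 2*x[1] + 1)

def g1(x):
    """Nonlinear constraint g1(x) = x1 + x2 - x1*x2 - 3/2 >= 0."""
    return x[0] + x[1] - x[0]*x[1] - 1.5

x_star = np.array([0.5, -1.0])       # unconstrained minimum
x_b = np.array([1.1825, -1.7397])    # constrained solution

print(f(x_star))        # 0 at the unconstrained minimum
print(g1(x_star))       # negative: x* is infeasible for the constrained problem
print(f(x_b), g1(x_b))  # approx. 3.0607, with g1 approx. 0 (active constraint)
```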

### 2.4 Sufficient Conditions for a Solution

All nonlinear functions will be assumed to have continuous second derivatives in the neighbourhood of the solution.

#### 2.4.1 Unconstrained minimization

The following conditions are sufficient for the point ${x}^{*}$ to be an unconstrained local minimum of $f\left(x\right)$:
1. (i)$‖\nabla f\left({x}^{*}\right)‖=0$ and
2. (ii)${\nabla }^{2}f\left({x}^{*}\right)$ is positive definite,
where $‖·‖$ denotes the Euclidean norm.

#### 2.4.2 Minimization subject to bounds on the variables

At the solution of a bounds-constrained problem, variables which are not on their bounds are termed free variables. If it is known in advance which variables are on their bounds at the solution, the problem can be solved as an unconstrained problem in just the free variables; thus, the sufficient conditions for a solution are similar to those for the unconstrained case, applied only to the free variables.
Sufficient conditions for a feasible point ${x}^{*}$ to be the solution of a bounds-constrained problem are as follows:
1. (i)$‖\overline{g}\left({x}^{*}\right)‖=0$; and
2. (ii)$\overline{G}\left({x}^{*}\right)$ is positive definite; and
3. (iii)$\frac{\partial }{\partial {x}_{j}}f\left({x}^{*}\right)<0$ if ${x}_{j}={u}_{j}$; $\frac{\partial }{\partial {x}_{j}}f\left({x}^{*}\right)>0$ if ${x}_{j}={l}_{j}$,
where $\overline{g}\left(x\right)$ is the gradient of $f\left(x\right)$ with respect to the free variables, and $\overline{G}\left(x\right)$ is the Hessian matrix of $f\left(x\right)$ with respect to the free variables. The extra condition (iii) ensures that $f\left(x\right)$ cannot be reduced by moving off one or more of the bounds.

#### 2.4.3 Linearly-constrained minimization

For the sake of simplicity, the following description does not include a specific treatment of bounds or range constraints, since the results for general linear inequality constraints can be applied directly to these cases.
At a solution ${x}^{*}$ of a linearly-constrained problem (with the linear inequality constraints written as $Ax\ge b$), the constraints which hold as equalities are called the active or binding constraints. Assume that there are $t$ active constraints at the solution ${x}^{*}$, and let $\stackrel{^}{A}$ denote the matrix whose columns are the columns of $A$ corresponding to the active constraints, with $\stackrel{^}{b}$ the vector similarly obtained from $b$; then
 $\hat{A}^{\mathrm{T}}x^{*}=\hat{b}.$
The matrix $Z$ is defined as an $n×\left(n-t\right)$ matrix satisfying:
 $\hat{A}^{\mathrm{T}}Z=0;\quad Z^{\mathrm{T}}Z=I.$
The columns of $Z$ form an orthonormal basis for the set of vectors orthogonal to the columns of $\stackrel{^}{A}$.
Define
• ${g}_{Z}\left(x\right)={Z}^{\mathrm{T}}\nabla f\left(x\right)$, the projected gradient vector of $f\left(x\right)$;
• ${G}_{Z}\left(x\right)={Z}^{\mathrm{T}}{\nabla }^{2}f\left(x\right)Z$, the projected Hessian matrix of $f\left(x\right)$.
At the solution of a linearly-constrained problem, the projected gradient vector must be zero, which implies that the gradient vector $\nabla f\left({x}^{*}\right)$ can be written as a linear combination of the columns of $\stackrel{^}{A}$, i.e., $\nabla f\left({x}^{*}\right)=\sum _{i=1}^{t}{\lambda }_{i}^{*}{\stackrel{^}{a}}_{i}=\stackrel{^}{A}{\lambda }^{*}$. The scalar ${\lambda }_{i}^{*}$ is defined as the Lagrange multiplier corresponding to the $i$th active constraint. A simple interpretation of the $i$th Lagrange multiplier is that it gives the gradient of $f\left(x\right)$ along the $i$th active constraint normal; a convenient definition of the Lagrange multiplier vector (although not a recommended method for computation) is:
 $\lambda^{*}={(\hat{A}^{\mathrm{T}}\hat{A})}^{-1}\hat{A}^{\mathrm{T}}\nabla f(x^{*}).$
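As a sketch (NumPy, with a hypothetical toy problem; not the recommended computational method), consider minimizing $f(x)={x}_{1}^{2}+{x}_{2}^{2}$ subject to ${x}_{1}+{x}_{2}\ge 1$. The solution is ${x}^{*}=(0.5,0.5)$ with the single constraint active, and the formula above yields its multiplier:

```python
import numpy as np

x_star = np.array([0.5, 0.5])
grad_f = 2.0 * x_star                 # gradient of f(x) = x1^2 + x2^2 at x*
A_hat = np.array([[1.0], [1.0]])      # columns: normals of the active constraints

# lambda* = (A^T A)^{-1} A^T grad f(x*), computed here as a least squares solve.
lam, *_ = np.linalg.lstsq(A_hat, grad_f, rcond=None)
print(lam)   # lambda* = 1 > 0, consistent with an active '>=' constraint
```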
Sufficient conditions for ${x}^{*}$ to be the solution of a linearly-constrained problem are:
1. (i)${x}^{*}$ is feasible, and ${\stackrel{^}{A}}^{\mathrm{T}}{x}^{*}=\stackrel{^}{b}$; and
2. (ii)$‖{g}_{Z}\left({x}^{*}\right)‖=0$, or equivalently, $\nabla f\left({x}^{*}\right)=\stackrel{^}{A}{\lambda }^{*}$; and
3. (iii)${G}_{Z}\left({x}^{*}\right)$ is positive definite; and
4. (iv)${\lambda }_{i}^{*}>0$ if ${\lambda }_{i}^{*}$ corresponds to a constraint ${\stackrel{^}{a}}_{i}^{\mathrm{T}}{x}^{*}\ge {\stackrel{^}{b}}_{i}$;
${\lambda }_{i}^{*}<0$ if ${\lambda }_{i}^{*}$ corresponds to a constraint ${\stackrel{^}{a}}_{i}^{\mathrm{T}}{x}^{*}\le {\stackrel{^}{b}}_{i}$.
The sign of ${\lambda }_{i}^{*}$ is immaterial for equality constraints, which by definition are always active.

#### 2.4.4 Nonlinearly-constrained minimization

For nonlinearly-constrained problems, much of the terminology is defined exactly as in the linearly-constrained case. To simplify the notation, let us assume that all nonlinear constraints are in the form $c\left(x\right)\ge 0$. The set of active constraints at $x$ again means the set of constraints that hold as equalities at $x$, with corresponding definitions of $\stackrel{^}{c}$ and $\stackrel{^}{A}$: the vector $\stackrel{^}{c}\left(x\right)$ contains the active constraint functions, and the columns of $\stackrel{^}{A}\left(x\right)$ are the gradient vectors of the active constraints. As before, $Z$ is defined in terms of $\stackrel{^}{A}\left(x\right)$ as a matrix such that:
 $\hat{A}^{\mathrm{T}}Z=0;\quad Z^{\mathrm{T}}Z=I$
where the dependence on $x$ has been suppressed for compactness.
The projected gradient vector ${g}_{Z}\left(x\right)$ is the vector ${Z}^{\mathrm{T}}\nabla f\left(x\right)$. At the solution ${x}^{*}$ of a nonlinearly-constrained problem, the projected gradient must be zero, which implies the existence of Lagrange multipliers corresponding to the active constraints, i.e., $\nabla f\left({x}^{*}\right)=\stackrel{^}{A}\left({x}^{*}\right){\lambda }^{*}$.
The Lagrangian function is given by:
 $L(x,\lambda)=f(x)-\lambda^{\mathrm{T}}\hat{c}(x).$
We define ${g}_{L}\left(x\right)$ as the gradient of the Lagrangian function; ${G}_{L}\left(x\right)$ as its Hessian matrix, and ${\stackrel{^}{G}}_{L}\left(x\right)$ as its projected Hessian matrix, i.e., ${\stackrel{^}{G}}_{L}={Z}^{\mathrm{T}}{G}_{L}Z$.
Sufficient conditions for ${x}^{*}$ to be the solution of a nonlinearly-constrained problem are:
1. (i)${x}^{*}$ is feasible, and $\stackrel{^}{c}\left({x}^{*}\right)=0$; and
2. (ii)$‖{g}_{Z}\left({x}^{*}\right)‖=0$, or, equivalently, $\nabla f\left({x}^{*}\right)=\stackrel{^}{A}\left({x}^{*}\right){\lambda }^{*}$; and
3. (iii)${\stackrel{^}{G}}_{L}\left({x}^{*}\right)$ is positive definite; and
4. (iv)${\lambda }_{i}^{*}>0$ if ${\lambda }_{i}^{*}$ corresponds to a constraint of the form ${\stackrel{^}{c}}_{i}\ge 0$.
The sign of ${\lambda }_{i}^{*}$ is immaterial for equality constraints, which by definition are always active.
Note that condition (ii) implies that the projected gradient of the Lagrangian function must also be zero at ${x}^{*}$, since the application of ${Z}^{\mathrm{T}}$ annihilates the matrix $\stackrel{^}{A}\left({x}^{*}\right)$.

### 2.5 Background to Optimization Methods

All the algorithms contained in this chapter generate an iterative sequence $\left\{{x}^{\left(k\right)}\right\}$ that converges to the solution ${x}^{*}$ in the limit, except for some special problem categories (i.e., linear and quadratic programming). To terminate computation of the sequence, a convergence test is performed to determine whether the current estimate of the solution is an adequate approximation. The convergence tests are discussed in Section 2.7.
Most of the methods construct a sequence $\left\{{x}^{\left(k\right)}\right\}$ satisfying:
 $x^{(k+1)}=x^{(k)}+\alpha^{(k)}p^{(k)},$
where the vector ${p}^{\left(k\right)}$ is termed the direction of search, and ${\alpha }^{\left(k\right)}$ is the steplength. The steplength ${\alpha }^{\left(k\right)}$ is chosen so that $f\left({x}^{\left(k+1\right)}\right)<f\left({x}^{\left(k\right)}\right)$ and is computed using one of the techniques for one-dimensional optimization referred to in Section 2.5.1.
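A minimal sketch of this iteration (plain NumPy steepest descent with a backtracking steplength; an illustration, not one of the Library's algorithms) shows the two ingredients: a search direction ${p}^{\left(k\right)}$ and a steplength ${\alpha }^{\left(k\right)}$ guaranteeing a decrease in $f$.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """x_{k+1} = x_k + alpha_k p_k with p_k = -grad f(x_k) and a backtracking
    steplength satisfying a simple Armijo sufficient-decrease condition."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g                      # search direction
        alpha = 1.0
        while f(x + alpha * p) > f(x) - 1e-4 * alpha * (g @ g):
            alpha *= 0.5            # shrink until f decreases sufficiently
        x = x + alpha * p
    return x

# Convex quadratic test problem with minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
x_min = steepest_descent(f, grad, [0.0, 0.0])
print(x_min)   # close to (1, -2)
```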

#### 2.5.1 One-dimensional optimization

The Library contains two special routines for minimizing a function of a single variable. Both routines are based on safeguarded polynomial approximation. One routine requires function evaluations only and fits a quadratic polynomial whilst the other requires function and gradient evaluations and fits a cubic polynomial. See Section 4.1 of Gill et al. (1981).

#### 2.5.2Methods for unconstrained optimization

The distinctions between methods arise primarily from the need to use varying levels of information about derivatives of $f\left(x\right)$ in defining the search direction. We describe three basic approaches to unconstrained problems, which may be extended to other problem categories. Since a full description of the methods would fill several volumes, the discussion here can do little more than allude to the processes involved and direct you to other sources for a full explanation.
(a) Newton-type Methods (Modified Newton Methods)
Newton-type methods use the Hessian matrix ${\nabla }^{2}f\left({x}^{\left(k\right)}\right)$, or its finite difference approximation, to define the search direction. The routines in the Library either require a subroutine that computes the elements of the Hessian directly or they approximate them by finite differences.
Newton-type methods are the most powerful methods available for general problems and will find the minimum of a quadratic function in one iteration. See Sections 4.4 and 4.5.1 of Gill et al. (1981).
(b) Quasi-Newton Methods
Quasi-Newton methods approximate the Hessian ${\nabla }^{2}f\left({x}^{\left(k\right)}\right)$ by a matrix ${B}^{\left(k\right)}$ which is modified at each iteration to include information obtained about the curvature of $f$ along the current search direction ${p}^{\left(k\right)}$. Although not as robust as Newton-type methods, quasi-Newton methods can be more efficient because the Hessian is neither computed directly nor approximated by finite differences. Quasi-Newton methods minimize a quadratic function in $n$ iterations, where $n$ is the number of variables. See Section 4.5.2 of Gill et al. (1981).
(c) Conjugate-gradient Methods
Unlike Newton-type and quasi-Newton methods, conjugate-gradient methods do not require the storage of an $n×n$ matrix and so are ideally suited to solve large problems.
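The quasi-Newton idea of absorbing observed curvature into a Hessian approximation can be sketched with the BFGS update. The sketch below is illustrative only (the function name and the quadratic test matrix are made up); it is not taken from any Library routine.

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of the Hessian approximation B, where
    s = x_{k+1} - x_k is the step taken and y = g_{k+1} - g_k is the
    observed change in the gradient along it."""
    Bs = B @ s
    return (B - np.outer(Bs, Bs) / s.dot(Bs)
              + np.outer(y, y) / y.dot(s))

# for a quadratic f(x) = 1/2 x^T A x the gradient change along a step s
# is exactly y = A s, so each update absorbs true curvature information
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
rng = np.random.default_rng(0)
for _ in range(20):
    s = rng.standard_normal(2)
    y = A @ s
    B = bfgs_update(B, s, y)
print(B)   # after each update B satisfies the secant condition B @ s == y
```

The update keeps $B$ symmetric and, provided $y^{\mathrm{T}}s>0$, positive definite, which is why quasi-Newton search directions remain descent directions.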

#### 2.5.3Methods for nonlinear least squares problems

These methods are similar to those for general nonlinear optimization but exploit the special structure of the Hessian matrix to give improved computational efficiency.
Since
 $f(x) = \sum_{i=1}^{m} r_i^2(x)$
the Hessian matrix is of the form
 $\nabla^2 f(x) = 2\left(J(x)^{\mathrm{T}} J(x) + \sum_{i=1}^{m} r_i(x)\, \nabla^2 r_i(x)\right),$
where $J\left(x\right)$ is the Jacobian matrix of $r\left(x\right)$.
In the neighbourhood of the solution, $‖r\left(x\right)‖$ is often small compared to $‖J{\left(x\right)}^{\mathrm{T}}J\left(x\right)‖$ (for example, when $r\left(x\right)$ represents the goodness-of-fit of a nonlinear model to observed data). In such cases, $2J{\left(x\right)}^{\mathrm{T}}J\left(x\right)$ may be an adequate approximation to ${\nabla }^{2}f\left(x\right)$, thereby avoiding the need to compute or approximate second derivatives of $\left\{{r}_{i}\left(x\right)\right\}$. See Section 4.7 of Gill et al. (1981).
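The resulting Gauss–Newton iteration, which drops the second-derivative term of the residuals, can be sketched on a small zero-residual fitting problem. The model, data and names below are made up purely for illustration.

```python
import numpy as np

def gauss_newton_step(r, J):
    """Solve (J^T J) p = -J^T r: a Newton-type step in which the Hessian
    2(J^T J + sum_i r_i Hess r_i) is approximated by 2 J^T J."""
    return np.linalg.solve(J.T @ J, -J.T @ r)

# fit y = a * exp(b t) to noise-free synthetic data, true (a, b) = (2, 0.5)
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * np.exp(0.5 * t)

def residual_and_jacobian(x):
    a, b = x
    m = a * np.exp(b * t)
    r = m - y                                     # residuals r_i(x)
    J = np.column_stack([np.exp(b * t),           # dr_i / da
                         a * t * np.exp(b * t)])  # dr_i / db
    return r, J

x = np.array([1.5, 0.6])        # starting point near the solution
for _ in range(15):
    r, J = residual_and_jacobian(x)
    x = x + gauss_newton_step(r, J)
print(x)   # -> approximately [2.0, 0.5]
```

Because the data are fitted exactly here, $‖r\left(x\right)‖\to 0$ at the solution and the neglected term vanishes, so the iteration converges as fast as a full Newton method.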

#### 2.5.4Methods for handling constraints

There are two main approaches for handling constraints in optimization algorithms – the active-set sequential quadratic programming method (or just SQP) and the interior point method (IPM). It is important to understand their very distinct features as both algorithms complement each other. The easiest method of comparison is to look at how the inequality constraints are treated and how the solver approaches the optimal solution (the progress of the KKT optimality measures: optimality, feasibility, complementarity).
Inequality constraints are the hard part of the optimization because of their ‘twofold nature’. If the optimal solution strictly satisfies the inequality, i.e., the optimal point is in the interior of the constraint, the inequality constraint does not influence the result and could be removed from the model. On the other hand, if the inequality is satisfied as an equality (is active at the solution), the constraint must be present and could be treated as an equality from the very beginning. This is expressed by the complementarity in KKT conditions.
Solvers based on the active-set method solve at each iteration a quadratic approximation of the original problem; they try to estimate which constraints need to be kept (are active) and which can be ignored. A practical consequence is that the algorithm partly ‘walks along the boundary’ of the feasible region given by the constraints. The iterates thus become feasible early on with respect to all linear constraints (and a local linearization of the nonlinear constraints), and this feasibility is preserved through the iterations. Complementarity is satisfied by default, and once the active set is determined correctly and optimality is within the tolerance, the solver finishes. The number of iterations might be high but each is relatively cheap. See Chapter 6 of Gill et al. (1981) for further details.
In contrast, an interior point method generates iterations that avoid the boundary defined by the inequality constraints. As the solver progresses the iterates are allowed to get closer and closer to the boundary and converge to the optimal solution which might lie on the boundary. From the practical point of view, IPM typically requires only tens of iterations. Each iteration consists of solving a large linear system of equations taking into account all variables and constraints, so each iteration is fairly expensive. All three optimality measures are reduced simultaneously.
In many cases it is difficult to predict which of the algorithms will behave better on a particular problem; however, some initial guidance can be given in the following table:

| Interior Point Method (IPM) | Active-set Method (SQP) |
|---|---|
| Can exploit second derivatives and their structure | Stays feasible with regard to linear constraints through most of the iterations |
| Efficient on unconstrained or loosely constrained problems | Very efficient for highly constrained problems |
| Efficient also for (both convex and nonconvex) quadratic problems (QP) | Better results on pathological problems in our experience |
| Better use of multi-core architectures (in multithreaded implementations) | Generally requires fewer function evaluations (efficient for problems with expensive function evaluations) |
| New interface, easier to use | Requires first derivatives but can also work with function values only |
| | Can capitalize on a good initial point |
| | Allows warm starts (good for a sequence of similar problems) |
| | Infeasibility detection |
Unless specific features offered by only one algorithm are required, the initial decision should be based on the availability of the derivatives of the problem and on the number of constraints (for example, expressed as a ratio between the number of variables and the sum of the numbers of linear and nonlinear constraints). The availability of exact second derivatives is a clear advantage for IPM, so unless the number of constraints is close to the number of variables, IPM will probably work better. Similarly, if a large-scale problem has relatively few constraints (e.g., fewer than $40$%), IPM might be more successful, especially as the problem gets bigger. On the other hand, if no derivatives are available, either SQP or a specialized algorithm from the Library (see Derivative-free Optimization, Section 2.2.5) needs to be used. As the number of constraints grows, SQP might be faster. For problems which do not fall into either category, it is not easy to anticipate which solver will work better and some experimentation might be required.

#### 2.5.5Methods for handling multi-objective optimization

Suppose we have objective functions ${f}_{i}\left(x\right)$, $i>1$, all of which we need to minimize at the same time. There are two main approaches to this problem:
(i) Combine the individual objectives into one composite objective. Typically, this might be a weighted sum of the objectives, e.g.,
 $w_1 f_1(x) + w_2 f_2(x) + \cdots + w_n f_n(x).$
Here you choose the weights to express the relative importance of the corresponding objective. Ideally each of the ${f}_{i}\left(x\right)$ should be of comparable size at a solution.
(ii) Order the objectives in order of importance. Suppose ${f}_{\mathit{i}}$ are ordered such that ${f}_{\mathit{i}}\left(x\right)$ is more important than ${f}_{\mathit{i}+1}\left(x\right)$, for $\mathit{i}=1,2,\dots ,n-1$. Then in the lexicographical approach to multi-objective optimization a sequence of subproblems is solved. Firstly, solve the problem for objective function ${f}_{1}\left(x\right)$ and denote by ${r}_{1}$ the value of this minimum. If $\left(\mathit{i}-1\right)$ subproblems have been solved with results ${r}_{1},{r}_{2},\dots ,{r}_{\mathit{i}-1}$, then subproblem $\mathit{i}$ becomes $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({f}_{\mathit{i}}\left(x\right)\right)$ subject to ${r}_{k}\le {f}_{k}\left(x\right)\le {r}_{k}$, for $\mathit{k}=1,2,\dots ,i-1$, plus the other constraints.
Clearly the bounds on ${f}_{k}$ might be relaxed at your discretion.
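Approach (i) can be sketched as follows. This is a toy illustration with two one-variable objectives and a crude grid scan standing in for a proper optimization routine; all names are made up.

```python
import numpy as np

def weighted_sum(objectives, weights):
    """Combine objectives f_1, ..., f_n into the single composite
    objective w_1 f_1(x) + ... + w_n f_n(x)."""
    def composite(x):
        return sum(w * f(x) for f, w in zip(objectives, weights))
    return composite

f1 = lambda x: (x - 1.0)**2       # pulls the solution towards x = 1
f2 = lambda x: (x + 1.0)**2       # pulls the solution towards x = -1
g = weighted_sum([f1, f2], [3.0, 1.0])

# grid scan in place of a minimizer: 3(x-1)^2 + (x+1)^2 has its minimum
# at x = 0.5, biased towards f1's minimizer by the larger weight
xs = np.linspace(-2.0, 2.0, 4001)
x_best = xs[np.argmin([g(x) for x in xs])]
print(x_best)   # -> 0.5
```

With equal weights the minimizer would sit at $x=0$, midway between the two individual minimizers; increasing $w_1$ moves it towards the minimizer of $f_1$, which is exactly the trade-off the weights are meant to express.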
In general, if NAG routines from Chapter E04 are used then only local minima are found. This means that a better solution to an individual objective might be found without worsening the optimal solutions to the other objectives. Ideally you seek a Pareto solution; one in which an improvement in one objective can only be achieved by a worsening of another objective.
To obtain a Pareto solution routines from Chapter E05 might be used or, alternatively, a pragmatic attempt to derive a global minimum might be tried (see e05ucf). In this approach, a variety of different minima are computed for each subproblem by starting from a range of different starting points. The best solution achieved is taken to be the global minimum. The more starting points chosen the greater confidence you might have in the computed global minimum.

### 2.6Scaling

Scaling (in a broadly defined sense) often has a significant influence on the performance of optimization methods.
Since convergence tolerances and other criteria are necessarily based on an implicit definition of ‘small’ and ‘large’, problems with unusual or unbalanced scaling may cause difficulties for some algorithms.
Although there are currently no user-callable scaling routines in the Library, scaling can be performed automatically in routines which solve sparse LP, QP or NLP problems and in some dense solver routines. Such routines have an optional parameter ‘Scale Option’ which you can set; see individual routine documents for details.
The following sections present some general comments on problem scaling.

#### 2.6.1Transformation of variables

One method of scaling is to transform the variables from their original representation, which may reflect the physical nature of the problem, to variables that have certain desirable properties in terms of optimization. It is generally helpful for the following conditions to be satisfied:
(i) the variables are all of similar magnitude in the region of interest;
(ii) a fixed change in any of the variables results in similar changes in $f\left(x\right)$. Ideally, a unit change in any variable produces a unit change in $f\left(x\right)$;
(iii) the variables are transformed so as to avoid cancellation error in the evaluation of $f\left(x\right)$.
Normally, you should restrict yourself to linear transformations of variables, although occasionally nonlinear transformations are possible. The most common such transformation (and often the most appropriate) is of the form
 $x_{\mathrm{new}} = D x_{\mathrm{old}},$
where $D$ is a diagonal matrix with constant coefficients. Our experience suggests that more use should be made of the transformation
 $x_{\mathrm{new}} = D x_{\mathrm{old}} + v,$
where $v$ is a constant vector.
Consider, for example, a problem in which the variable ${x}_{3}$ represents the position of the peak of a Gaussian curve to be fitted to data for which the extreme values are $150$ and $170$; therefore, ${x}_{3}$ is known to lie in the range $150$–$170$. One possible scaling would be to define a new variable ${\overline{x}}_{3}$, given by
 $\bar{x}_3 = \frac{x_3}{170}.$
A better transformation, however, is given by defining ${\overline{x}}_{3}$ as
 $\bar{x}_3 = \frac{x_3-160}{10}.$
Frequently, an improvement in the accuracy of evaluation of $f\left(x\right)$ can result if the variables are scaled before the routines to evaluate $f\left(x\right)$ are coded. For instance, in the Gaussian curve-fitting problem mentioned above, ${x}_{3}$ may always occur in terms of the form $\left({x}_{3}-{x}_{m}\right)$, where ${x}_{m}$ is a constant representing the mean peak position.
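The recommended transformation $x_{\mathrm{new}} = D x_{\mathrm{old}} + v$ can be sketched as follows, mapping each original variable range onto $[-1,1]$. The function names are illustrative; for the range $[150,170]$ this reproduces exactly the transformation $\left({x}_{3}-160\right)/10$ above.

```python
import numpy as np

def make_affine_scaling(lo, hi):
    """Build x_new = D x_old + v (and its inverse) mapping each original
    variable range [lo_i, hi_i] onto [-1, 1]."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = 2.0 / (hi - lo)              # diagonal entries of D
    v = -(hi + lo) / (hi - lo)
    return (lambda x: d * x + v), (lambda x: (x - v) / d)

# the Gaussian peak position known to lie in [150, 170]
fwd, inv = make_affine_scaling([150.0], [170.0])
print(fwd(np.array([150.0, 160.0, 170.0])))   # endpoints map to -1 and 1
print(inv(np.array([0.0])))                   # scaled 0 maps back to 160
```

The optimizer then works with the well-scaled variables, and `inv` recovers the physically meaningful values for reporting.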

#### 2.6.2Scaling the objective function

The objective function has already been mentioned in the discussion of scaling the variables. The solution of a given problem is unaltered if $f\left(x\right)$ is multiplied by a positive constant, or if a constant value is added to $f\left(x\right)$. It is generally preferable for the objective function to be of the order of unity in the region of interest; thus, if in the original formulation $f\left(x\right)$ is always of the order of ${10}^{+5}$ (say), then the value of $f\left(x\right)$ should be multiplied by ${10}^{-5}$ when evaluating the function within an optimization routine. If a constant is added or subtracted in the computation of $f\left(x\right)$, usually it should be omitted, i.e., it is better to formulate $f\left(x\right)$ as ${x}_{1}^{2}+{x}_{2}^{2}$ rather than as ${x}_{1}^{2}+{x}_{2}^{2}+1000$ or even ${x}_{1}^{2}+{x}_{2}^{2}+1$. The inclusion of such a constant in the calculation of $f\left(x\right)$ can result in a loss of significant figures.
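The loss of significant figures caused by a large additive constant can be seen in a short experiment. This is illustrative only; single precision is used to make the effect obvious at a realistic magnitude.

```python
import numpy as np

# f(x) = x1^2 + x2^2 versus x1^2 + x2^2 + 1000: near the minimum the
# informative part of f is tiny, and adding a large constant swamps it
x = np.float32(1e-4)
plain = x * x + x * x                                  # about 2e-8, resolved
shifted = (np.float32(1000.0) + x * x + x * x) - np.float32(1000.0)
print(plain, shifted)   # the shifted version loses the term entirely
```

In single precision the spacing between representable numbers near $1000$ is about $10^{-4}$, so the ${10}^{-8}$-sized term is rounded away completely; the same effect occurs in double precision with proportionally smaller terms.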

#### 2.6.3Scaling the constraints

A ‘well scaled’ set of constraints has two main properties. Firstly, each constraint should be well-conditioned with respect to perturbations of the variables. Secondly, the constraints should be balanced with respect to each other, i.e., all the constraints should have ‘equal weight’ in the solution process.
The solution of a linearly- or nonlinearly-constrained problem is unaltered if the $i$th constraint is multiplied by a positive weight ${w}_{i}$. At the approximation of the solution determined by an active-set solver, any active linear constraints will (in general) be satisfied ‘exactly’ (i.e., to within the tolerance defined by machine precision) if they have been properly scaled. This is in contrast to any active nonlinear constraints, which will not (in general) be satisfied ‘exactly’ but will have ‘small’ values (for example, ${\stackrel{^}{g}}_{1}\left({x}^{*}\right)={10}^{-8}$, ${\stackrel{^}{g}}_{2}\left({x}^{*}\right)={-10}^{-6}$, and so on). In general, this discrepancy will be minimized if the constraints are weighted so that a unit change in $x$ produces a similar change in each constraint.
A second reason for introducing weights is related to the effect of the size of the constraints on the Lagrange multiplier estimates and, consequently, on the active-set strategy. This means that different sets of weights may cause an algorithm to produce different sequences of iterates. Additional discussion is given in Gill et al. (1981).

### 2.7Analysis of Computed Results

#### 2.7.1Convergence criteria

The convergence criteria inevitably vary from routine to routine, since in some cases more information is available to be checked (for example, is the Hessian matrix positive definite?), and different checks need to be made for different problem categories (for example, in constrained minimization it is necessary to verify whether a trial solution is feasible). Nonetheless, the underlying principles of the various criteria are the same; in non-mathematical terms, they are:
(i) is the sequence $\left\{{x}^{\left(k\right)}\right\}$ converging?
(ii) is the sequence $\left\{{f}^{\left(k\right)}\right\}$ converging?
(iii) are the necessary and sufficient conditions for the solution satisfied?
The decision as to whether a sequence is converging is necessarily speculative. The criterion used in the present routines is to assume convergence if the relative change occurring between two successive iterations is less than some prescribed quantity. Criterion (iii) is the most reliable but often the conditions cannot be checked fully because not all the required information may be available.
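A relative-change test of the kind described above might be sketched as follows. This is illustrative code only; the actual criteria used by the Library are routine-specific.

```python
import numpy as np

def converged(x_old, x_new, f_old, f_new, tol=1e-8):
    """Declare convergence when successive iterates and function values
    change by less than tol relative to their current size (the '1 +'
    guards the test when x or f is close to zero)."""
    x_old, x_new = np.asarray(x_old, float), np.asarray(x_new, float)
    dx = np.linalg.norm(x_new - x_old)
    x_small = dx <= tol * (1.0 + np.linalg.norm(x_new))
    f_small = abs(f_new - f_old) <= tol * (1.0 + abs(f_new))
    return bool(x_small and f_small)

print(converged([1.0, 2.0], [1.0, 2.0 + 1e-12], 5.0, 5.0 - 1e-12))  # True
print(converged([1.0, 2.0], [1.5, 2.0], 5.0, 4.0))                  # False
```

A stagnating solver can pass such a test far from a solution, which is why criterion (iii), checking the optimality conditions themselves, is preferred whenever the required information is available.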

#### 2.7.2Checking results

Little a priori guidance can be given as to the quality of the solution found by a nonlinear optimization algorithm, since no guarantees can be given that the methods will not fail. Therefore, you should always check the computed solution even if the routine reports success. Frequently a ‘solution’ may have been found even when the routine does not report a success. The reason for this apparent contradiction is that the routine needs to assess the accuracy of the solution. This assessment is not an exact process and consequently may be unduly pessimistic. Any ‘solution’ is in general only an approximation to the exact solution, and it is possible that the accuracy you have specified is too stringent.
Further confirmation can be sought by trying to check whether or not convergence tests are almost satisfied, or whether or not some of the sufficient conditions are nearly satisfied. When it is thought that a routine has returned a nonzero value of ifail only because the requirements for ‘success’ were too stringent it may be worth restarting with increased convergence tolerances.
For constrained problems, check whether the solution returned is feasible, or nearly feasible; if not, the solution returned is not an adequate solution.
Confidence in a solution may be increased by restarting the solver with a different initial approximation to the solution. See Section 8.3 of Gill et al. (1981) for further information.

#### 2.7.3Monitoring progress

Many of the routines in the chapter have facilities to allow you to monitor the progress of the minimization process, and you are encouraged to make use of these facilities. Monitoring information can be a great aid in assessing whether or not a satisfactory solution has been obtained, and in indicating difficulties in the minimization problem or in the ability of the routine to cope with the problem.
The behaviour of the function, the estimated solution and first derivatives can help in deciding whether a solution is acceptable and what to do in the event of a return with a nonzero value of ifail.

#### 2.7.4Confidence intervals for least squares solutions

When estimates of the parameters in a nonlinear least squares problem have been found, it may be necessary to estimate the variances of the parameters and the fitted function. These can be calculated from the Hessian of the objective $f\left(x\right)$ at the solution.
In many least squares problems, the Hessian is adequately approximated at the solution by $G=2{J}^{\mathrm{T}}J$ (see Section 2.5.3). The Jacobian, $J$, or a factorization of $J$ is returned by all the comprehensive least squares routines and, in addition, e04ycf can be used to estimate variances of the parameters following the use of most of the nonlinear least squares routines, in the case that $G=2{J}^{\mathrm{T}}J$ is an adequate approximation.
Let $H$ be the inverse of $G$, and $S$ be the sum of squares, both calculated at the solution $\overline{x}$; an unbiased estimate of the variance of the $i$th parameter ${x}_{i}$ is
 $\mathrm{var}\,\bar{x}_i = \frac{2S}{m-n} H_{ii}$
and an unbiased estimate of the covariance of ${\overline{x}}_{i}$ and ${\overline{x}}_{j}$ is
 $\mathrm{covar}\left(\bar{x}_i,\bar{x}_j\right) = \frac{2S}{m-n} H_{ij}.$
If ${x}^{*}$ is the true solution then the $100\left(1-\beta \right)%\text{}$ confidence interval on $\overline{x}$ is
 $\bar{x}_i - \sqrt{\mathrm{var}\,\bar{x}_i}\;{t}_{\left(1-\beta /2,m-n\right)} < {x}_{i}^{*} < \bar{x}_i + \sqrt{\mathrm{var}\,\bar{x}_i}\;{t}_{\left(1-\beta /2,m-n\right)},$
where ${t}_{\left(1-\beta /2,m-n\right)}$ is the $100\left(1-\beta /2\right)$ percentage point of the $t$-distribution with $m-n$ degrees of freedom.
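Given $J$ and the residuals at the solution, the estimates above might be computed as in the following sketch. The helper name and the tabulated $t$ value are assumptions, and $H=(2{J}^{\mathrm{T}}J)^{-1}$ is formed explicitly only because the example is tiny; e04ycf performs this task for you after the Library's least squares routines.

```python
import numpy as np

def parameter_confidence(J, r, t_value):
    """Variances and confidence half-widths for least squares parameters:
    H is the inverse of G = 2 J^T J, S the sum of squared residuals, and
    var x_i = 2S/(m - n) * H_ii, as in the formulas above."""
    m, n = J.shape
    S = r.dot(r)
    H = np.linalg.inv(2.0 * (J.T @ J))
    var = 2.0 * S / (m - n) * np.diag(H)
    return var, np.sqrt(var) * t_value

# fitting a constant to 5 observations: the solution is the sample mean
# and the variance estimate reduces to the familiar s^2 / m
yobs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
J = np.ones((5, 1))
r = np.mean(yobs) - yobs          # residuals at the solution
t_975 = 2.776                     # t_(0.975, 4), from statistical tables
var, halfwidth = parameter_confidence(J, r, t_975)
print(var, halfwidth)             # var -> [0.5]
```

The confidence interval for each parameter is then $\bar{x}_i \pm$ `halfwidth[i]`.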
In the majority of problems, the residuals ${r}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,m$, contain the difference between the values of a model function $\varphi \left(z,x\right)$ calculated for $m$ different values of the independent variable $z$, and the corresponding observed values at these points. The minimization process determines the parameters, or constants $x$, of the fitted function $\varphi \left(z,x\right)$. For any value, $\overline{z}$, of the independent variable $z$, an unbiased estimate of the variance of $\varphi$ is
 $\mathrm{var}\,\varphi = \frac{2S}{m-n} \sum_{i=1}^{n} \sum_{j=1}^{n} \left[\frac{\partial \varphi}{\partial x_i}\right]_{\bar{z}} \left[\frac{\partial \varphi}{\partial x_j}\right]_{\bar{z}} H_{ij}.$
The $100\left(1-\beta \right)%$ confidence interval on $\varphi$ at the point $\overline{z}$ is
 $\varphi\left(\bar{z},\bar{x}\right) - \sqrt{\mathrm{var}\,\varphi}\;{t}_{\left(1-\beta /2,m-n\right)} < \varphi\left(\bar{z},{x}^{*}\right) < \varphi\left(\bar{z},\bar{x}\right) + \sqrt{\mathrm{var}\,\varphi}\;{t}_{\left(1-\beta /2,m-n\right)}.$
For further details on the analysis of least squares solutions see Bard (1974) and Wolberg (1967).

## 3Recommendations on Choice and Use of Available Routines

The choice of routine depends on several factors: the type of problem (LP, NLP, unconstrained, etc.); whether or not a problem is sparse; the level of derivative information available (function values only, etc.); whether or not the routine is to be used in a multithreaded environment; and other factors. Not all choices are catered for in the current version of the Library.

### 3.1NAG Optimization Modelling Suite

Mark 26 of the Library introduced the NAG optimization modelling suite, a suite of routines which allows you to define and solve various optimization problems in a uniform manner. The first key feature of the suite is that the definition of the optimization problem and the call to the solver have been separated, so it is possible to set up a problem in the same way for different solvers. The second feature is that the problem representation is built up from basic components (building blocks) as defined in Sections 2.2.1 and 2.2.2 (for example, a QP problem is composed of a quadratic objective, simple bounds and linear constraints); therefore, different types of problems reuse the same routines for their common parts.
A connecting element to all routines in the suite is a handle, a pointer to an internal data structure, which is passed among the routines. It holds all information about the problem, the solution and the solver. Each handle should go through four stages in its life: initialization, problem formulation, problem solution and deallocation.
The initialization is performed by e04raf which creates an empty problem with $n$ decision variables or alternatively by e04saf which loads the whole model from a file. A call to e04rzf marks the end of the life of the handle as it deallocates all the allocated memory and data within the handle and destroys the handle itself. During this time the handle must only be modified by the provided routines. Working with a handle which has not been properly initialized will result in ${\mathbf{ifail}}={\mathbf{1}}$ (uniform across the suite) and is potentially very dangerous as it may cause unpredictable behaviour.
After the initialization of an empty problem, the problem formulation should be composed of the basic building blocks. A high degree of freedom is given at this stage. Various types of objective functions and constraints can be defined. Furthermore, editing of the formulation is also supported, which is useful when you need to redefine parts of the problem and resolve. More details on the routines of the suite are as follows.
The objective may be defined as one of the following:
• e04ref – a linear objective as a dense vector;
• e04rff, e04rsf and e04rtf – a quadratic objective or a sparse linear objective;
• e04rgf – a nonlinear objective function;
• e04rmf – a nonlinear least squares objective function.
The routines for constraint definition are:
• e04rhf – simple bounds;
• e04rjf – linear constraints;
• e04rsf and e04rtf – quadratic constraints;
• e04rkf – nonlinear constraints;
• e04rlf – second derivatives for the objective and/or constraints;
• e04rbf – quadratic cone constraints;
• e04rnf – linear matrix inequalities;
• e04rpf – quadratic terms for bilinear matrix inequalities.
There are various ways in which the formulation may be edited. Multiple calls of the routines listed above either extend the formulation (e.g., multiple blocks of linear constraints may be defined) or redefine a component (for instance, a newly defined objective function will overwrite the existing one). In addition, routines are provided to manipulate the formulation further. For example, new variables may be added, a subset of variables within the model may be fixed, and existing variables or constraints may be temporarily removed (disabled) from the model and then brought back (enabled) later. You may modify the bounds of an individual constraint, a coefficient in the linear objective or in a linear constraint, etc. However, the formulation may not be altered while a solver is running, otherwise ${\mathbf{ifail}}={\mathbf{2}}$ will be returned. The following is a list of editing routines and their functionalities.
• e04taf – add new variables;
• e04rcf – specify variable properties, particularly if they are continuous, binary or integer;
• e04tcf – disable (temporarily remove) an existing variable or constraint from the model;
• e04tbf – enable (bring back) a variable or constraint disabled by e04tcf;
• e04tdf – modify bounds of an existing variable or constraint, fix a variable to a certain value;
• e04tef – modify the coefficient of a single variable in the linear objective;
• e04tjf – modify a single coefficient in a linear constraint.
These routines may be called in an arbitrary order; however, a call to e04rnf must precede a call to e04rpf for the matrix inequalities with bilinear terms, and the nonlinear objective or constraints (e04rgf or e04rkf) must precede the definition of the second derivatives by e04rlf. Also note that a redefinition of the nonlinear objective function or constraints removes their previously defined Hessians. For further details, please refer to the documentation of the individual routines.
The suite also includes the following service routines:
• e04ryf – query/printing routine;
• e04zmf – supply an optional parameter from a character string;
• e04zpf – supply one or more optional parameters from a file;
• e04znf – get the current value of an optional parameter;
• e04rxf – read or write information into the handle via real array, for instance, extract intermediate results during solving;
• e04rwf – read or write information into the handle via integer array, for instance, extract the problem size.
When the problem is fully formulated, the handle can be passed to a solver which is compatible with the defined problem. You are free to switch between compatible solvers or to resolve after a modification of the formulation, the optional parameters and/or the starting point. If a solver cannot deal with the given formulation it will return ${\mathbf{ifail}}={\mathbf{2}}$. The NAG optimization modelling suite comprises the following solvers:
• e04fff, e04fgf – derivative-free nonlinear least squares with box constraints;
• e04kff – first order active-set method for box constrained nonlinear optimization;
• e04ggf – bound constrained nonlinear least squares using derivatives;
• e04jdf, e04jef – derivative-free box constrained nonlinear optimization;
• e04mtf – linear programming solver based on an interior point method;
• e04ptf – Second-order Cone Programming (SOCP) and convex quadratically constrained quadratic programming;
• e04stf – nonlinear programming based on an interior point method;
• e04svf – semidefinite programming optionally also with bilinear matrix inequalities.
A diagram of the life cycle of the handle is depicted in Figure 2.
Figure 2

### 3.2Reverse Communication Routines

Any solver dealing with nonlinear functions needs a way to obtain function values (or derivatives) at each of the trial points during the optimization run. Typically, you would write the objective function and nonlinear constraints (if any) as subroutines conforming to the rigid format described in the relevant routine document and pass them to the solver as callbacks. You call the solver once and the solver calls your callbacks as required. This is the simplest approach and it works in the majority of cases. However, sometimes an alternative in the form of reverse communication routines might be helpful.
Reverse communication routines are called in a loop. The solver returns when it needs your functions to be evaluated, the values are computed outside of the solver, and the routine is called again with the latest values passed in on the argument list. This loop continues until the solver finishes. Such an approach is most beneficial when the solver is called from a computer language which does not fully support procedure arguments in a way that is compatible with the Library. It is also useful if a large amount of data needs to be transmitted into the routine. See Section 7 in How to Use the NAG Library for more information about reverse communication routines.
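Stripped to its essentials, the reverse-communication pattern looks like the following sketch: a toy fixed-step gradient descent written as a Python generator. It illustrates only the calling pattern, not any Library routine, and all names are made up.

```python
def minimize_rc(x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Toy reverse-communication minimizer of a one-variable function:
    it yields the point where it needs the gradient, and the caller
    sends the gradient value back in."""
    x = float(x0)
    for _ in range(max_iter):
        g = yield ('evaluate_gradient', x)   # hand control back to caller
        if abs(g) < tol:
            break
        x -= lr * g
    yield ('done', x)

# the caller's loop: evaluate f'(x) = 2(x - 3) for f(x) = (x - 3)^2
# outside the solver and pass the latest value back in
solver = minimize_rc(0.0)
task, x = next(solver)
while task != 'done':
    task, x = solver.send(2.0 * (x - 3.0))
print(x)   # -> close to 3.0
```

The key property is that the evaluation happens entirely in the caller's code between calls, so the caller is free to compute it in any language, process, or environment it likes.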
This chapter currently offers the following reverse communication routines: e04fgf, e04jef and e04uff/​e04ufa.

### 3.3Choosing Between Variant Routines for Some Problems

As evidenced by the wide variety of routines available in Chapter E04, it is clear that no single algorithm can solve all optimization problems. It is important to identify the type of problem (see Section 2.2.3) and to try to match the problem to the most suitable routine. The decision trees in Section 4 can help you identify the best solver for your problem.
Sometimes in Chapter E04 more than one routine is available to solve precisely the same optimization problem. If their differences lie in the underlying method, refer to the sections above. Section 2.5.4 discusses key features of interior point methods (represented by e04stf) and active-set SQP methods (for example, e04ugf/​e04uga or e04vhf). Alternatively, there are routines implementing slightly different variants of the same method (such as e04ucf/​e04uca and e04wdf). Experience shows that in this case, although both routines can usually solve the same problem and obtain similar results, sometimes one routine will be faster, sometimes one might find a different local minimum to the other, or, in difficult cases, one routine may obtain a solution when the other one fails.
After using one of these routines, if the results obtained are unacceptable, it may be worthwhile trying the other routine. In the absence of any other information, in the first instance you are recommended to try using e04ucf/​e04uca, and if that proves unsatisfactory, try using e04wdf. Although the algorithms used are very similar, the two routines each have slightly different optional parameters which may allow the course of the computation to be altered in different ways.
Other pairs of routines which solve the same kind of problem are e04nqf (recommended first choice) or e04nkf/​e04nka, for sparse quadratic or linear programming problems, and e04vhf (recommended) or e04ugf/​e04uga, for sparse nonlinear programming. In these cases the argument lists are not as similar as e04ucf/​e04uca or e04wdf, but the same considerations apply.

### 3.4Thread Safe Routines

Some of the routines in this chapter come in pairs, with each routine in the pair having exactly the same functionality, except that one of them has additional arguments in order to make it safe for use in multithreaded applications. The routine that is safe for use in multithreaded applications has an ‘a’ as the last character in the name, in place of the usual ‘f’. An example of such a pair is e04aba and e04abf.
All other routines in this chapter are thread safe.

### 3.5Easy-to-use and Comprehensive Routines

Some older routines appear in the Library in two forms: a comprehensive form and an easy-to-use form. The purpose of the easy-to-use forms is to make the routine simpler to use by including in the calling sequence only those arguments absolutely essential to the definition of the problem, as opposed to arguments relevant to the solution method. If you are an experienced user the comprehensive routines have additional arguments which enable you to improve their efficiency by ‘tuning’ the method to a particular problem. In the easy-to-use routines, these extra arguments are determined by fixing them at a known safe and reasonably efficient value.
Solvers introduced since Mark 12 of the Library use optional parameters instead.

### 3.6Checking the Derivatives

One of the most common errors in the use of optimization routines is that user-supplied subroutines do not evaluate the relevant partial derivatives correctly. Because exact gradient information normally enhances efficiency in all areas of optimization, you are encouraged to provide analytical derivatives whenever possible. However, mistakes in the computation of derivatives can result in serious and obscure run-time errors. Consequently, there are mechanisms provided in the Library to perform derivative checks and you are highly encouraged to use them. However, note that the checks are not infallible.
For recent solvers (such as e04kff, e04stf, e04ucf/​e04uca or e04vhf) such checks may be turned on directly by optional parameters (see, for example, Verify Derivatives, Verify Level or Verify). For older solvers, service routines are provided for this task; they are inexpensive to use in terms of the number of calls they require to user-supplied subroutines.
The appropriate checking routines are as follows:
| Minimization routine | Checking routine(s) |
|---|---|
| e04kdf | e04hcf |
| e04lbf | e04hcf and e04hdf |
| e04gbf | e04yaf |
| e04gdf | e04yaf |
| e04hef | e04yaf and e04ybf |
A second type of service routine computes a set of finite differences to be used when approximating first derivatives. Such differences are required as input arguments by some routines that use only function evaluations.
e04ycf estimates selected elements of the variance-covariance matrix for the computed regression parameters following the use of a nonlinear least squares routine.
e04xaf/​e04xaa estimates the gradient and Hessian of a function at a point, given a routine to calculate function values only, or estimates the Hessian of a function at a point, given a routine to calculate function and gradient values.
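The principle behind such derivative checks is simple: compare each user-supplied derivative against a finite-difference estimate obtained from function values alone. The following sketch (plain Python/NumPy, not a NAG interface; the function name `check_gradient`, its tolerance and its scaling are illustrative assumptions) shows a central-difference gradient check of the kind these service routines perform:

```python
import numpy as np

def check_gradient(f, grad, x, tol=1e-6):
    """Compare an analytic gradient with a central-difference estimate.

    Generic sketch of the test the Library's derivative checkers perform;
    the name, tolerance and scaling are illustrative, not a NAG interface.
    """
    x = np.asarray(x, dtype=float)
    g = np.asarray(grad(x), dtype=float)
    h = np.sqrt(np.finfo(float).eps)          # balance truncation vs rounding
    g_fd = np.empty_like(g)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h * max(1.0, abs(x[i]))        # step scaled to the variable
        g_fd[i] = (f(x + e) - f(x - e)) / (2.0 * e[i])
    err = np.max(np.abs(g - g_fd) / np.maximum(1.0, np.abs(g_fd)))
    return err <= tol, err

# Rosenbrock's function with its exact gradient: the check passes...
f = lambda x: (1.0 - x[0])**2 + 100.0*(x[1] - x[0]**2)**2
grad = lambda x: np.array([-2.0*(1.0 - x[0]) - 400.0*x[0]*(x[1] - x[0]**2),
                           200.0*(x[1] - x[0]**2)])
ok, _ = check_gradient(f, grad, [-1.2, 1.0])
# ...while a gradient that is wrong by 1% is flagged.
bad, _ = check_gradient(f, lambda x: 1.01*grad(x), [-1.2, 1.0])
```

A check of this type cannot be infallible: an unlucky step length, severe cancellation or a discontinuity near the test point can mask a genuine error, which is why the Library checkers test at a point you supply.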

### 3.7Function Evaluations at Infeasible Points

All the solvers for constrained problems based on an active-set method will ensure that any evaluations of the objective function occur at points which approximately (up to the given tolerance) satisfy any simple bounds or linear constraints.
There is no attempt to ensure that the current iteration satisfies any nonlinear constraints. If you wish to prevent your objective function being evaluated outside some known region (where it may be undefined or not practically computable), you may try to confine the iteration within this region by imposing suitable simple bounds or linear constraints (but beware as this may create new local minima where these constraints are active).
Note also that some routines allow you to return the flag argument (iflag, inform, mode or status) with a negative value to indicate that the objective function (or the nonlinear constraints, where appropriate) cannot be evaluated at the given point. If the routine cannot recover (e.g., it cannot find a different trial point), this forces an immediate clean exit from the routine.
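As an illustration of this convention (a hypothetical wrapper, not an actual NAG argument list), a user-supplied objective might report an uncomputable point like this:

```python
import math

def objfun(x):
    """Objective f(x) = log(x)/x, computable only for x > 0.

    Returns (flag, value): flag 0 on success; a negative flag mimics the
    iflag/inform/mode/status convention for telling the solver that the
    point cannot be evaluated.  Hypothetical wrapper, not a NAG API.
    """
    if x <= 0.0:
        return -1, None            # ask the solver for another trial point
    return 0, math.log(x) / x

flag_bad, _ = objfun(-0.5)         # outside the computable region
flag_ok, val = objfun(1.0)         # computable: f(1) = 0
```

A solver receiving the negative flag would either retry from a different trial point or, if it cannot, terminate cleanly.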

### 3.8Related Problems

Apart from the standard types of optimization problem, there are other related problems which can be solved by routines in this or other chapters of the Library.
h02bbf solves dense integer LP problems, h02cbf solves dense integer QP problems, h02cef solves sparse integer QP problems, h02daf solves dense mixed integer NLP problems and h03abf solves a special type of problem known as a ‘transportation’ problem.
Several routines in Chapters F04 and F08 solve linear least squares problems, i.e., $\mathrm{minimize}\sum _{i=1}^{m}{r}_{i}{\left(x\right)}^{2}$ where ${r}_{i}\left(x\right)={b}_{i}-\sum _{j=1}^{n}{a}_{ij}{x}_{j}$.
e02gaf solves an overdetermined system of linear equations in the ${l}_{1}$ norm, i.e., minimizes $\sum _{i=1}^{m}|{r}_{i}\left(x\right)|$, with ${r}_{i}$ as above, and e02gbf solves the same problem subject to linear inequality constraints.
e02gcf solves an overdetermined system of linear equations in the ${l}_{\infty }$ norm, i.e., minimizes $\underset{i}{\mathrm{max}}\phantom{\rule{0.25em}{0ex}}|{r}_{i}\left(x\right)|$, with ${r}_{i}$ as above.
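To make the residual notation concrete, the following NumPy sketch (a generic illustration, not a call to the Library routines above) sets up a small overdetermined system with residuals $r_i(x) = b_i - \sum_j a_{ij} x_j$, solves it in the least squares sense and evaluates the ${l}_{1}$ and ${l}_{\infty}$ residual norms that e02gaf and e02gcf minimize instead:

```python
import numpy as np

# Overdetermined system b ~ A x: fit a line c0 + c1*t to m = 4 points
# (illustrative data; two unknowns, four equations).
t = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(t), t])      # coefficients a_ij
b = np.array([0.1, 1.9, 4.1, 5.9])             # right-hand sides b_i

# l2 solution: minimize sum_i r_i(x)^2, where r = b - A @ x
x, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x

l1_norm = np.sum(np.abs(r))      # the objective e02gaf minimizes (over x)
linf_norm = np.max(np.abs(r))    # the objective e02gcf minimizes (over x)
```

Note that `l1_norm` and `linf_norm` here are simply the norms of the ${l}_{2}$-optimal residual; minimizing those norms directly, as e02gaf/e02gbf/e02gcf do, generally yields a different $x$.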
Chapter E05 contains routines for global minimization.
Section 2.5.5 describes how a multi-objective optimization problem might be addressed using routines from this chapter and from Chapter E05.

## 4Decision Trees

This section helps you to identify the best solver for your problem. First of all, establish the problem type by referring to Table 1 below and Section 2.2. Then navigate through the particular decision tree to the recommended routines. If more than one routine is listed, their order suggests which one to try first. Also see Section 3.3 for further discussion about choosing between variant routines.
Table 1
Decision Matrix
| constraints \ objective | no objective | linear | quadratic | nonlinear | sum of squares |
|---|---|---|---|---|---|
| unconstrained | | | QP, see Tree 2 | NLP, see Tree 3 | LSQ, see Tree 4 |
| simple bounds | LP, see Tree 1 | LP, see Tree 1 | QP, see Tree 2 | NLP, see Tree 3 | LSQ, see Tree 4 |
| linear | LP, see Tree 1 | LP, see Tree 1 | QP, see Tree 2 | NLP, see Tree 3 | LSQ, see Tree 4 |
| quadratic | QCQP, see Tree 2 | QCQP, see Tree 2 | QCQP, see Tree 2 | NLP, see Tree 3 | LSQ, see Tree 4 |
| nonlinear | NLP, see Tree 3 | NLP, see Tree 3 | NLP, see Tree 3 | NLP, see Tree 3 | LSQ, see Tree 4 |
| cone | SOCP, e04ptf | SOCP, e04ptf | | | |
| matrix inequalities | SDP, e04svf | SDP, e04svf | SDP, e04svf | | |

### Tree 1: Linear Programming (LP)

- Is the problem sparse/large-scale?
  - Yes: e04mtf, e04nqf, e04nkf/​e04nka
  - No: e04mff/​e04mfa, e04ncf/​e04nca

### Tree 2: Quadratic Programming (QP)

- Are there quadratic constraints?
  - Yes: Is the problem convex?
    - Yes: e04ptf, e04stf
    - No: e04stf, e04vhf, e04ugf/​e04uga
  - No: Is the problem sparse/large-scale?
    - Yes: Is it convex?
      - Yes: e04nqf, e04ptf, e04stf, e04nkf/​e04nka
      - No: e04stf, e04vhf, e04ugf/​e04uga
    - No: Is it convex?
      - Yes: e04ncf/​e04nca
      - No: e04nff/​e04nfa

### Tree 3: Nonlinear Programming (NLP)

- Is the problem sparse/large-scale?
  - Yes: Is it unconstrained or only with simple bounds?
    - Yes: Are first derivatives available?
      - Yes: e04kff, e04stf, e04vhf, e04uga
      - No: e04kff, e04vhf, e04uga
    - No: Are first derivatives available?
      - Yes: Are second derivatives available?
        - Yes: e04stf
        - No: e04vhf, e04stf, e04uga
      - No: e04vhf, e04uga
  - No: Are there linear or nonlinear constraints?
    - Yes: e04uca, e04ufa, e04wdf
    - No: Is there only one variable?
      - Yes: Are first derivatives available?
        - Yes: e04bba
        - No: e04aba
      - No: Is it unconstrained with many discontinuities in the objective?
        - Yes: e04cbf or e05saf
        - No: Are first derivatives available?
          - Yes: Are second derivatives available?
            - Yes: Are you an experienced user?
              - Yes: e04lbf
              - No: e04lyf
            - No: Are many function evaluations problematic?
              - Yes: Are you an experienced user?
                - Yes: e04uca, e04ufa, e04wdf
                - No: e04kyf
              - No: Are you an experienced user?
                - Yes: e04kdf
                - No: e04kzf
          - No: Is the objective expensive to evaluate or noisy?
            - Yes: e04jdf, e04jef
            - No: Are you an experienced user?
              - Yes: e04uca, e04ufa, e04wdf
              - No: e04jyf

### Tree 4: Least squares problems (LSQ)

- Is the objective a sum of squared linear functions, with no nonlinear constraints?
  - Yes: Are there linear constraints?
    - Yes: e04ncf/​e04nca
    - No: Are there simple bounds?
      - Yes: e04pcf, e04ncf/​e04nca
      - No: Chapters F04, F07 or F08, or e04pcf, e04ncf/​e04nca
  - No: Are there linear or nonlinear constraints?
    - Yes: e04usf/​e04usa
    - No: Are there simple bounds?
      - Yes: Are first derivatives available?
        - Yes: e04ggf, e04usf/​e04usa
        - No: e04fff, e04fgf
      - No: Are first derivatives available?
        - Yes: Are second derivatives available?
          - Yes: Are you an experienced user?
            - Yes: e04ggf, e04hef
            - No: e04ggf, e04hyf
          - No: Are many function evaluations problematic?
            - Yes: Are you an experienced user?
              - Yes: e04gbf
              - No: e04gyf
            - No: Are you an experienced user?
              - Yes: e04gdf, e04ggf
              - No: e04ggf, e04gzf
        - No: e04fff, e04fgf, e04fcf

## 5Functionality Index

 Linear programming (LP),
 dense,
 active-set method/primal simplex,
 alternative 1 e04mff
 alternative 2 e04ncf
 sparse,
 interior point method (IPM) e04mtf
 active-set method/primal simplex,
 recommended (see Section 3.3) e04nqf
 alternative e04nkf
 Quadratic programming (QP),
 dense,
 active-set method for (possibly nonconvex) QP problem e04nff
 active-set method for convex QP problem e04ncf
 sparse,
 active-set method for sparse convex QP problem,
 recommended (see Section 3.3) e04nqf
 alternative e04nkf
 interior point method (IPM) for (possibly nonconvex) QP problems e04stf
 Second-order Cone Programming (SOCP),
 dense or sparse,
 interior point method e04ptf
 Semidefinite programming (SDP),
 generalized augmented Lagrangian method for SDP and SDP with bilinear matrix inequalities (BMI-SDP) e04svf
 Nonlinear programming (NLP),
 dense,
 direct communication,
 recommended (see Section 3.3) e04ucf
 alternative e04wdf
 reverse communication e04uff
 sparse,
 interior point method (IPM) e04stf
 recommended (see Section 3.3) e04vhf
 alternative e04ugf
 Nonlinear programming (NLP) – derivative-free optimization (DFO),
 model-based method for bound-constrained optimization,
 reverse communication e04jef
 direct communication e04jdf
 Nelder–Mead simplex method for unconstrained optimization e04cbf
 Nonlinear programming (NLP) – special cases,
 unidimensional optimization (one-dimensional) with bound constraints,
 method based on quadratic interpolation, no derivatives e04abf
 method based on cubic interpolation e04bbf
 bound-constrained,
 first order active-set method (nonlinear conjugate gradient) e04kff
 quasi-Newton algorithm, no derivatives e04jyf
 quasi-Newton algorithm, first derivatives e04kyf
 modified Newton algorithm, first derivatives e04kdf
 modified Newton algorithm, first derivatives, easy-to-use e04kzf
 modified Newton algorithm, first and second derivatives e04lbf
 modified Newton algorithm, first and second derivatives, easy-to-use e04lyf
 Linear least squares, linear regression, data fitting,
 constrained,
 bound-constrained least squares problem e04pcf
 linearly-constrained active-set method e04ncf
 Nonlinear least squares, data fitting,
 unconstrained,
 combined Gauss–Newton and modified Newton algorithm,
 no derivatives e04fcf
 no derivatives, easy-to-use e04fyf
 first derivatives e04gdf
 first derivatives, easy-to-use e04gzf
 first and second derivatives e04hef
 first and second derivatives, easy-to-use e04hyf
 combined Gauss–Newton and quasi-Newton algorithm,
 first derivatives e04gbf
 first derivatives, easy-to-use e04gyf
 covariance matrix for nonlinear least squares problem (unconstrained) e04ycf
 bound constrained,
 model-based derivative-free algorithm,
 direct communication e04fff
 reverse communication e04fgf
 trust region algorithm,
 first derivatives, optionally second derivatives e04ggf
 generic, including nonlinearly constrained,
 nonlinear constraints active-set sequential quadratic programming (SQP) e04usf
 NAG optimization modelling suite,
 initialization of a handle for the suite,
 initialization as an empty problem e04raf
 read a problem from a file to a handle e04saf
 problem definition,
 define a linear objective function e04ref
 define a linear or a quadratic objective function e04rff
 define a nonlinear least squares objective function e04rmf
 define a nonlinear objective function e04rgf
 define a second-order cone e04rbf
 define bounds of variables e04rhf
 define a block of linear constraints e04rjf
 define a block of nonlinear constraints e04rkf
 define a structure of Hessian of the objective, constraints or the Lagrangian e04rlf
 add one or more linear matrix inequality constraints e04rnf
 define bilinear matrix terms e04rpf
 factor of quadratic coefficient matrix e04rtf
 set variable properties (e.g., integrality) e04rcf
 problem editing,
 define new variables e04taf
 disable (temporarily remove) components of the model e04tcf
 enable (bring back) previously disabled components of the model e04tbf
 modify a single coefficient in a linear constraint e04tjf
 modify a single coefficient in the linear objective function e04tef
 modify bounds of an existing constraint or variable e04tdf
 solvers,
 interior point method (IPM) for linear programming (LP) e04mtf
 first order active-set method (nonlinear conjugate gradient) e04kff
 interior point method (IPM) for nonlinear programming (NLP) e04stf
 generalized augmented Lagrangian method for SDP and SDP with bilinear matrix inequalities (BMI-SDP) e04svf
 interior point method (IPM) for Second-order Cone programming (SOCP) e04ptf
 derivative-free optimization (DFO) for nonlinear least squares problems,
 direct communication e04fff
 reverse communication e04fgf
 trust region optimization for nonlinear least squares problems (BXNL) e04ggf
 model-based method for bound-constrained optimization,
 direct communication e04jdf
 reverse communication e04jef
 deallocation,
 destroy the problem handle e04rzf
 service routines,
 print information about a problem handle e04ryf
 set/get information in a problem handle e04rxf
 set/get integer information in a problem handle e04rwf
 supply optional parameter values from a character string e04zmf
 get the setting of an option e04znf
 supply optional parameter values from external file e04zpf
 Service routines,
 input and output (I/O),
 read MPS data file defining LP, QP, MILP or MIQP problem e04mxf
 write MPS data file defining LP, QP, MILP or MIQP problem e04mwf
 read sparse SDPA data files for linear SDP problems e04rdf
 read a problem from a file to a handle e04saf
 derivative check and approximation,
 check user's routine for calculating first derivatives of function e04hcf
 check user's routine for calculating second derivatives of function e04hdf
 check user's routine for calculating Jacobian of first derivatives e04yaf
 check user's routine for calculating Hessian of a sum of squares e04ybf
 estimate (using numerical differentiation) gradient and/or Hessian of a function e04xaf
 determine the pattern of nonzeros in the Jacobian matrix for e04vhf e04vjf
 covariance matrix for nonlinear least squares problem (unconstrained) e04ycf
 option setting routines,
 NAG optimization modelling suite,
 supply optional parameter values from a character string e04zmf
 get the setting of an option e04znf
 supply optional parameter values from external file e04zpf
 e04mff/​e04mfa,
 initialization routine for e04mfa e04wbf
 supply optional parameter values from external file e04mgf
 supply optional parameter values from a character string e04mhf
 e04ncf/​e04nca,
 initialization routine for e04nca e04wbf
 supply optional parameter values from external file e04ndf
 supply optional parameter values from a character string e04nef
 e04nff/​e04nfa,
 initialization routine for e04nfa e04wbf
 supply optional parameter values from external file e04ngf
 supply optional parameter values from a character string e04nhf
 e04nkf/​e04nka,
 initialization routine for e04nka e04wbf
 supply optional parameter values from external file e04nlf
 supply optional parameter values from a character string e04nmf
 e04nqf,
 initialization routine e04npf
 supply optional parameter values from external file e04nrf
 set a single option from a character string e04nsf
 set a single option from an integer argument e04ntf
 set a single option from a real argument e04nuf
 get the setting of an integer valued option e04nxf
 get the setting of a real valued option e04nyf
 e04ucf/​e04uca and e04uff/​e04ufa,
 initialization routine for e04uca and e04ufa e04wbf
 supply optional parameter values from external file e04udf
 supply optional parameter values from a character string e04uef
 e04ugf/​e04uga,
 initialization routine for e04uga e04wbf
 supply optional parameter values from external file e04uhf
 supply optional parameter values from a character string e04ujf
 e04usf/​e04usa,
 initialization routine for e04usa e04wbf
 supply optional parameter values from external file e04uqf
 supply optional parameter values from a character string e04urf
 e04vhf,
 initialization routine e04vgf
 supply optional parameter values from external file e04vkf
 set a single option from a character string e04vlf
 set a single option from an integer argument e04vmf
 set a single option from a real argument e04vnf
 get the setting of an integer valued option e04vrf
 get the setting of a real valued option e04vsf
 e04wdf,
 initialization routine e04wcf
 supply optional parameter values from external file e04wef
 set a single option from a character string e04wff
 set a single option from an integer argument e04wgf
 set a single option from a real argument e04whf
 get the setting of an integer valued option e04wkf
 get the setting of a real valued option e04wlf

## 6Auxiliary Routines Associated with Library Routine Arguments

- e04cbk (nagf_opt_uncon_simplex_dummy_monit): see the description of the argument monit in e04cbf.
- e04fcv (nagf_opt_lsq_uncon_quasi_deriv_comp_lsqlin_fun): see the description of the argument lsqlin in e04gbf.
- e04fdz (nagf_opt_lsq_dummy_lsqmon): see the description of the argument lsqmon in e04fcf, e04gdf and e04hef.
- e04ffu (nagf_opt_bobyqa_ls_dummy_monit): see the description of the argument monit in e04fff.
- e04ggu (nagf_opt_bxnl_dummy_lsqhes): see the description of the argument lsqhes in e04ggf.
- e04ggv (nagf_opt_bxnl_dummy_lsqhprd): see the description of the argument lsqhprd in e04ggf.
- e04hev (nagf_opt_lsq_uncon_quasi_deriv_comp_lsqlin_deriv): see the description of the argument lsqlin in e04gbf.
- e04jcp (nagf_opt_bounds_bobyqa_func_dummy_monfun): see the description of the argument monfun in e04jcf.
- e04jdu (nagf_opt_dummy_monit): see the description of the argument monit in e04jdf.
- e04jdv (nagf_opt_dfno_dummy_objfun): see the description of the argument objfun in e04jdf.
- e04kfu (nagf_opt_bounds_dummy_monit): see the description of the argument monit in e04kff.
- e04kfv (nagf_opt_bounds_dummy_objfun): see the description of the argument objfun in e04kff.
- e04kfw (nagf_opt_bounds_dummy_objgrd): see the description of the argument objgrd in e04kff.
- e04mtu (nagf_opt_lp_imp_dummy_monit): see the description of the argument monit in e04mtf.
- e54nfu (nagf_opt_qp_dense_sample_qphess): see the description of the argument qphess in e04nff/​e04nfa and h02cbf.
- e04nfu (nagf_opt_qp_dense_sample_qphess_old): see the description of the argument qphess in e04nff/​e04nfa and h02cbf.
- e54nku (nagf_opt_qpconvex1_sparse_dummy_qphx): see the description of the argument qphx in e04nkf/​e04nka and h02cef.
- e04nku (nagf_opt_qpconvex1_sparse_dummy_qphx_old): see the description of the argument qphx in e04nkf/​e04nka and h02cef.
- e04nsh (nagf_opt_qpconvex2_sparse_dummy_qphx): see the description of the argument qphx in e04nqf.
- e04ptu (nagf_opt_socp_dummy_monit): see the description of the argument monit in e04ptf.
- e04stu (nagf_opt_ipopt_dummy_monit): see the description of the argument monit in e04stf.
- e04stv (nagf_opt_ipopt_dummy_objfun): see the description of the argument objfun in e04stf.
- e04stw (nagf_opt_ipopt_dummy_objgrd): see the description of the argument objgrd in e04stf.
- e04stx (nagf_opt_ipopt_dummy_confun): see the description of the argument confun in e04stf.
- e04sty (nagf_opt_ipopt_dummy_congrd): see the description of the argument congrd in e04stf.
- e04stz (nagf_opt_ipopt_dummy_hess): see the description of the argument hess in e04stf.
- e04udm (nagf_opt_nlp1_dummy_confun): see the description of the argument confun in e04ucf/​e04uca and e04usf/​e04usa.
- e04ugm (nagf_opt_nlp1_sparse_dummy_confun): see the description of the argument confun in e04ugf/​e04uga.
- e04ugn (nagf_opt_nlp1_sparse_dummy_objfun): see the description of the argument objfun in e04ugf/​e04uga.
- e04wdp (nagf_opt_nlp2_dummy_confun): see the description of the argument confun in e04wdf.

## 7 Withdrawn or Deprecated Routines

The following lists all routines that have been withdrawn since Mark 23 of the Library, or that remain in the Library but are deprecated.
| Routine | Status | Replacement Routine(s) |
|---|---|---|
| e04ccf | Withdrawn at Mark 24 | e04cbf |
| e04dgf | Deprecated | e04kff |
| e04djf | Deprecated | No replacement routine required |
| e04dkf | Deprecated | No replacement routine required |
| e04jcf | Deprecated | e04jdf and e04jef |
| e04jcp | Deprecated | |
| e04mzf | Deprecated | e04mxf |
| e04vdm | Withdrawn at Mark 24 | |
| e04zcf | Withdrawn at Mark 24 | No longer required |
## 8 References

Alizadeh F and Goldfarb D (2003) Second-order cone programming Mathematical Programming 95(1) 3–51
Bard Y (1974) Nonlinear Parameter Estimation Academic Press
Chvátal V (1983) Linear Programming W.H. Freeman
Dantzig G B (1963) Linear Programming and Extensions Princeton University Press
Fletcher R (1987) Practical Methods of Optimization (2nd Edition) Wiley
Gill P E and Murray W (ed.) (1974) Numerical Methods for Constrained Optimization Academic Press
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Lobo M S, Vandenberghe L, Boyd S and Lebret H (1998) Applications of second-order cone programming Linear Algebra and its Applications 284(1–3) 193–228
Murray W (ed.) (1972) Numerical Methods for Unconstrained Optimization Academic Press
Nocedal J and Wright S J (2006) Numerical Optimization (2nd Edition) Springer Series in Operations Research, Springer, New York
Wolberg J R (1967) Prediction Analysis Van Nostrand