NAG Toolbox: nag_opt_lsq_uncon_mod_deriv_easy (e04gz)
Purpose
nag_opt_lsq_uncon_mod_deriv_easy (e04gz) is an easy-to-use modified Gauss–Newton algorithm for finding an unconstrained minimum of a sum of squares of $m$ nonlinear functions in $n$ variables $\left(m\ge n\right)$. First derivatives are required.
It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
Syntax
[x, fsumsq, user, ifail] = e04gz(m, lsfun2, x)
[x, fsumsq, user, ifail] = nag_opt_lsq_uncon_mod_deriv_easy(m, lsfun2, x, 'n', n, 'user', user)
Description
nag_opt_lsq_uncon_mod_deriv_easy (e04gz) is similar to the function LSFDN2 in the NPL Algorithms Library. It is applicable to problems of the form
$\underset{x}{\mathrm{minimize}}F\left(x\right)=\sum _{i=1}^{m}{\left[{f}_{i}\left(x\right)\right]}^{2}$
where
$x={\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)}^{\mathrm{T}}$ and
$m\ge n$. (The functions
${f}_{i}\left(x\right)$ are often referred to as ‘residuals’.)
You must supply a function to evaluate the residuals and their first derivatives at any point $x$.
Before attempting to minimize the sum of squares, the algorithm checks the function for consistency. Then, from a starting point supplied by you, a sequence of points is generated which is intended to converge to a local minimum of the sum of squares. These points are generated using estimates of the curvature of $F\left(x\right)$.
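The core of such a modified Gauss–Newton method is the step obtained by solving the normal equations $\left({J}^{\mathrm{T}}J\right)p=-{J}^{\mathrm{T}}f$, where $J$ is the Jacobian of the residuals; e04gz itself adds the safeguards described in Gill and Murray (1978). The following pure-Python sketch illustrates only the bare iteration, on a hypothetical two-parameter exponential model; it is not the NAG implementation:

```python
import math

# Hypothetical data generated exactly by y_i = 2*exp(0.5*t_i), so the
# minimum of the sum of squares is x = (2, 0.5) with F(x) = 0.
t = [0.0, 0.25, 0.5, 0.75, 1.0]
y = [2.0 * math.exp(0.5 * ti) for ti in t]

def residuals_and_jacobian(x):
    """Return the residuals f_i(x) and the rows of the Jacobian df_i/dx_j."""
    f, J = [], []
    for ti, yi in zip(t, y):
        e = math.exp(x[1] * ti)
        f.append(x[0] * e - yi)
        J.append([e, x[0] * ti * e])
    return f, J

def gauss_newton_step(x):
    """One undamped Gauss-Newton step: solve (J^T J) p = -J^T f (2x2 case)."""
    f, J = residuals_and_jacobian(x)
    a = sum(Ji[0] * Ji[0] for Ji in J)      # entries of J^T J
    b = sum(Ji[0] * Ji[1] for Ji in J)
    c = sum(Ji[1] * Ji[1] for Ji in J)
    g0 = sum(Ji[0] * fi for Ji, fi in zip(J, f))   # entries of J^T f
    g1 = sum(Ji[1] * fi for Ji, fi in zip(J, f))
    det = a * c - b * b                     # Cramer's rule on the 2x2 system
    p = [(-c * g0 + b * g1) / det, (b * g0 - a * g1) / det]
    return [x[0] + p[0], x[1] + p[1]]

x = [1.8, 0.45]                             # starting point near the minimum
for _ in range(20):
    x = gauss_newton_step(x)
```

For this zero-residual problem the iteration converges rapidly to the generating parameters $\left(2,0.5\right)$; the point of e04gz's additional machinery is to retain reliability when the start is poor or the residuals are large.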
References
Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem SIAM J. Numer. Anal. 15 977–992
Parameters
Compulsory Input Parameters
 1:
$\mathrm{m}$ – int64int32nag_int scalar

The number $m$ of residuals, ${f}_{i}\left(x\right)$, and the number $n$ of variables, ${x}_{j}$.
Constraint:
$1\le {\mathbf{n}}\le {\mathbf{m}}$.
 2:
$\mathrm{lsfun2}$ – function handle or string containing name of m-file

You must supply this function to calculate the vector of values ${f}_{i}\left(x\right)$ and the Jacobian matrix of first derivatives $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ at any point $x$. It should be tested separately before being used in conjunction with nag_opt_lsq_uncon_mod_deriv_easy (e04gz).
[fvec, fjac, user] = lsfun2(m, n, xc, ldfjac, user)
Input Parameters
 1:
$\mathrm{m}$ – int64int32nag_int scalar

$m$, the number of residuals.
 2:
$\mathrm{n}$ – int64int32nag_int scalar

$n$, the number of variables.
 3:
$\mathrm{xc}\left({\mathbf{n}}\right)$ – double array

The point $x$ at which the values of the ${f}_{i}$ and the $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ are required.
 4:
$\mathrm{ldfjac}$ – int64int32nag_int scalar

The first dimension of the array
fjac.
 5:
$\mathrm{user}$ – Any MATLAB object
lsfun2 is called from
nag_opt_lsq_uncon_mod_deriv_easy (e04gz) with the object supplied to
nag_opt_lsq_uncon_mod_deriv_easy (e04gz).
Output Parameters
 1:
$\mathrm{fvec}\left({\mathbf{m}}\right)$ – double array

${\mathbf{fvec}}\left(i\right)$ must be set to the value of
${f}_{\mathit{i}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,m$.
 2:
$\mathrm{fjac}\left({\mathbf{ldfjac}},{\mathbf{n}}\right)$ – double array

${\mathbf{fjac}}\left(\mathit{i},\mathit{j}\right)$ must be set to the value of $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.
 3:
$\mathrm{user}$ – Any MATLAB object
 3:
$\mathrm{x}\left({\mathbf{n}}\right)$ – double array

${\mathbf{x}}\left(\mathit{j}\right)$ must be set to a guess at the
$\mathit{j}$th component of the position of the minimum, for
$\mathit{j}=1,2,\dots ,n$. The function checks the first derivatives calculated by
lsfun2 at the starting point and so is more likely to detect any error in your functions if the initial
${\mathbf{x}}\left(j\right)$ are nonzero and mutually distinct.
Optional Input Parameters
 1:
$\mathrm{n}$ – int64int32nag_int scalar

Default:
the dimension of the array
x.
The number $m$ of residuals, ${f}_{i}\left(x\right)$, and the number $n$ of variables, ${x}_{j}$.
Constraint:
$1\le {\mathbf{n}}\le {\mathbf{m}}$.
 2:
$\mathrm{user}$ – Any MATLAB object
user is not used by
nag_opt_lsq_uncon_mod_deriv_easy (e04gz), but is passed to
lsfun2. Note that for large objects it may be more efficient to use a global variable which is accessible from the m-files than to use
user.
Output Parameters
 1:
$\mathrm{x}\left({\mathbf{n}}\right)$ – double array

The lowest point found during the calculations. Thus, if ${\mathbf{ifail}}={\mathbf{0}}$ on exit, ${\mathbf{x}}\left(j\right)$ is the $j$th component of the position of the minimum.
 2:
$\mathrm{fsumsq}$ – double scalar

The value of the sum of squares,
$F\left(x\right)$, corresponding to the final point stored in
x.
 3:
$\mathrm{user}$ – Any MATLAB object
 4:
$\mathrm{ifail}$ – int64int32nag_int scalar
${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see
Error Indicators and Warnings).
Error Indicators and Warnings
Note: nag_opt_lsq_uncon_mod_deriv_easy (e04gz) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and
do not generate an error of type NAG:error_n. See nag_issue_warnings.
 ${\mathbf{ifail}}=1$

On entry,  ${\mathbf{n}}<1$, 
or  ${\mathbf{m}}<{\mathbf{n}}$, 
or  $\mathit{lw}<8\times {\mathbf{n}}+2\times {\mathbf{n}}\times {\mathbf{n}}+2\times {\mathbf{m}}\times {\mathbf{n}}+3\times {\mathbf{m}}$, when ${\mathbf{n}}>1$, 
or  $\mathit{lw}<11+5\times {\mathbf{m}}$, when ${\mathbf{n}}=1$. 
 ${\mathbf{ifail}}=2$

There have been
$50\times n$ calls of
lsfun2, yet the algorithm does not seem to have converged. This may be due to an awkward function or to a poor starting point, so it is worth restarting
nag_opt_lsq_uncon_mod_deriv_easy (e04gz) from the final point held in
x.
 W ${\mathbf{ifail}}=3$
The final point does not satisfy the conditions for acceptance as a minimum, but no lower point could be found.
 ${\mathbf{ifail}}=4$
An auxiliary function has been unable to complete a singular value decomposition in a reasonable number of subiterations.
 W ${\mathbf{ifail}}=5$
 W ${\mathbf{ifail}}=6$
 W ${\mathbf{ifail}}=7$
 W ${\mathbf{ifail}}=8$

There is some doubt about whether the point
$x$ found by
nag_opt_lsq_uncon_mod_deriv_easy (e04gz) is a minimum of
$F\left(x\right)$. The degree of confidence in the result decreases as
ifail increases. Thus, when
${\mathbf{ifail}}={\mathbf{5}}$, it is probable that the final
$x$ gives a good estimate of the position of a minimum, but when
${\mathbf{ifail}}={\mathbf{8}}$ it is very unlikely that the function has found a minimum.
 ${\mathbf{ifail}}=9$
It is very likely that you have made an error in forming the derivatives
$\frac{\partial {f}_{i}}{\partial {x}_{j}}$ in
lsfun2.
 ${\mathbf{ifail}}=99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
 ${\mathbf{ifail}}=399$
Your licence key may have expired or may not have been installed correctly.
 ${\mathbf{ifail}}=999$
Dynamic memory allocation failed.
If you are not satisfied with the result (e.g., because
ifail lies between
$3$ and
$8$), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. Repeated failure may indicate some defect in the formulation of the problem.
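A common cause of failure, flagged explicitly by ${\mathbf{ifail}}={\mathbf{9}}$, is an inconsistency between the residuals and the hand-coded derivatives in lsfun2. The consistency check that e04gz applies at the starting point can be imitated with a central finite difference; the sketch below (in Python, with illustrative function names that are not part of the NAG interface) shows the idea on a one-variable residual:

```python
import math

# A single hypothetical residual f(x) = exp(x) - 3x; its true derivative
# is exp(x) - 3.
def f(x):
    return math.exp(x) - 3.0 * x

def df_coded(x):          # the derivative as you might code it in lsfun2
    return math.exp(x) - 3.0

def df_wrong(x):          # a typical slip: a dropped term
    return math.exp(x)

def check_derivative(fun, dfun, x, h=1e-6, tol=1e-4):
    """Compare a coded derivative against a central finite difference."""
    fd = (fun(x + h) - fun(x - h)) / (2.0 * h)
    return abs(fd - dfun(x)) <= tol * max(1.0, abs(fd))

ok = check_derivative(f, df_coded, 0.7)    # consistent: check passes
bad = check_derivative(f, df_wrong, 0.7)   # inconsistent: check flags it
```

Applying such a check element by element to the Jacobian returned by lsfun2, at a point with nonzero, mutually distinct components, usually locates the faulty derivative quickly.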
Accuracy
If the problem is reasonably well scaled and a successful exit is made, then, for a computer with a mantissa of $t$ decimals, one would expect to get about $t/2-1$ decimals accuracy in the components of $x$ and between $t-1$ (if $F\left(x\right)$ is of order $1$ at the minimum) and $2t-2$ (if $F\left(x\right)$ is close to zero at the minimum) decimals accuracy in $F\left(x\right)$. For example, with $t=16$ one would expect roughly $7$ decimals accuracy in the components of $x$.
Further Comments
The number of iterations required depends on the number of variables, the number of residuals and their behaviour, and the distance of the starting point from the solution. The number of multiplications performed per iteration of
nag_opt_lsq_uncon_mod_deriv_easy (e04gz) varies, but for
$m\gg n$ is approximately
$n\times {m}^{2}+\mathit{O}\left({n}^{3}\right)$. In addition, each iteration makes at least one call of
lsfun2. So, unless the residuals and their derivatives can be evaluated very quickly, the run time will be dominated by the time spent in
lsfun2.
Ideally, the problem should be scaled so that the minimum value of the sum of squares is in the range $\left(0,+1\right)$ and so that at points a unit distance away from the solution the sum of squares is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_lsq_uncon_mod_deriv_easy (e04gz) will take less computer time.
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to
nag_opt_lsq_uncon_covariance (e04yc), using information returned in segments of the workspace array
w. See
nag_opt_lsq_uncon_covariance (e04yc) for further details.
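In the standard nonlinear regression setting, the variance-covariance estimate produced by such a routine is $C={s}^{2}{\left({J}^{\mathrm{T}}J\right)}^{-1}$ with ${s}^{2}=F\left(\overline{x}\right)/\left(m-n\right)$, evaluated at the solution $\overline{x}$. The pure-Python sketch below works this through for a hypothetical two-parameter straight-line model; it is illustrative only, not the e04yc computation:

```python
# Hypothetical straight-line model with residuals f_i = x1 + x2*t_i - y_i,
# so row i of the Jacobian is [1, t_i] (constant in x).
t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 1.9, 4.1, 5.9, 8.1]
m, n = len(t), 2

# Least squares solution from the normal equations (2x2 case).
a = float(m)
b = sum(t)
c = sum(ti * ti for ti in t)
g0 = sum(y)
g1 = sum(ti * yi for ti, yi in zip(t, y))
det = a * c - b * b
x1 = (c * g0 - b * g1) / det
x2 = (a * g1 - b * g0) / det

# Sum of squares at the solution and the residual variance estimate.
F = sum((x1 + x2 * ti - yi) ** 2 for ti, yi in zip(t, y))
s2 = F / (m - n)

# C = s^2 * inv(J^T J), with the 2x2 inverse written out explicitly.
C = [[ s2 * c / det, -s2 * b / det],
     [-s2 * b / det,  s2 * a / det]]
```

The diagonal entries of $C$ estimate the variances of the fitted coefficients; in practice e04yc should be used, since it works from the factorizations already computed by e04gz.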
Example
This example finds least squares estimates of
${x}_{1}$,
${x}_{2}$ and
${x}_{3}$ in the model
$y={x}_{1}+\frac{{t}_{1}}{{x}_{2}{t}_{2}+{x}_{3}{t}_{3}}$
using the
$15$ sets of data $\left(y,{t}_{1},{t}_{2},{t}_{3}\right)$ held in the arrays y and t in the program below.
The program uses
$\left(0.5,1.0,1.5\right)$ as the initial guess at the position of the minimum.
function e04gz_example
  fprintf('e04gz example results\n\n');
  global y t;
  m = int64(15);
  % Observed values y and independent variables t(:,1:3).
  y = [ 0.14, 0.18, 0.22, 0.25, 0.29, ...
        0.32, 0.35, 0.39, 0.37, 0.58, ...
        0.73, 0.96, 1.34, 2.10, 4.39];
  t = [ 1.0, 15.0, 1.0;
        2.0, 14.0, 2.0;
        3.0, 13.0, 3.0;
        4.0, 12.0, 4.0;
        5.0, 11.0, 5.0;
        6.0, 10.0, 6.0;
        7.0,  9.0, 7.0;
        8.0,  8.0, 8.0;
        9.0,  7.0, 7.0;
       10.0,  6.0, 6.0;
       11.0,  5.0, 5.0;
       12.0,  4.0, 4.0;
       13.0,  3.0, 3.0;
       14.0,  2.0, 2.0;
       15.0,  1.0, 1.0];
  n = 3;
  x = [0.5; 1; 1.5];   % initial guess
  [x, fsumsq, user, ifail] = e04gz(m, @lsfun2, x);
  fprintf('Best fit model parameters are:\n');
  for i = 1:n
    fprintf('    x_%d = %10.3f\n', i, x(i));
  end
  fprintf('\nSum of squares of residuals = %7.4f\n', fsumsq);

function [fvecc, fjacc, user] = lsfun2(m, n, xc, ljc, user)
  % Residuals f_i = x1 + t1/(x2*t2 + x3*t3) - y_i and their first derivatives.
  global y t;
  fvecc = zeros(m, 1);
  fjacc = zeros(ljc, n);
  for i = 1:m
    denom = xc(2)*t(i,2) + xc(3)*t(i,3);
    fvecc(i) = xc(1) + t(i,1)/denom - y(i);
    fjacc(i,1) = 1;
    dummy = -1/(denom*denom);
    fjacc(i,2) = t(i,1)*t(i,2)*dummy;
    fjacc(i,3) = t(i,1)*t(i,3)*dummy;
  end
e04gz example results
Best fit model parameters are:
x_1 = 0.082
x_2 = 1.133
x_3 = 2.344
Sum of squares of residuals = 0.0082
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015