NAG Toolbox: nag_opt_lsq_uncon_mod_deriv2_comp (e04he)
Purpose
nag_opt_lsq_uncon_mod_deriv2_comp (e04he) is a comprehensive modified Gauss–Newton algorithm for finding an unconstrained minimum of a sum of squares of m nonlinear functions in n variables (m ≥ n). First and second derivatives are required.
The function is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
Syntax
[x, fsumsq, fvec, fjac, s, v, niter, nf, user, ifail] = e04he(m, lsqfun, lsqhes, lsqmon, xtol, x, 'n', n, 'iprint', iprint, 'maxcal', maxcal, 'eta', eta, 'stepmx', stepmx, 'user', user)
[x, fsumsq, fvec, fjac, s, v, niter, nf, user, ifail] = nag_opt_lsq_uncon_mod_deriv2_comp(m, lsqfun, lsqhes, lsqmon, xtol, x, 'n', n, 'iprint', iprint, 'maxcal', maxcal, 'eta', eta, 'stepmx', stepmx, 'user', user)
Note: the interface to this routine has changed since earlier releases of the toolbox:
At Mark 24: maxcal was made optional; w and iw were removed from the interface; user was added to the interface.
At Mark 22: liw and lw were removed from the interface.
Description
nag_opt_lsq_uncon_mod_deriv2_comp (e04he) is essentially identical to the function LSQSDN in the NPL Algorithms Library. It is applicable to problems of the form:
minimize F(x) = sum(i = 1 to m) [f_i(x)]^2
where x = (x_1, x_2, ..., x_n)^T and m ≥ n. (The functions f_i(x) are often referred to as ‘residuals’.)
You must supply functions to calculate the values of the f_i(x) and their first derivatives and second derivatives at any point x.
From a starting point x^(1) supplied by you, the function generates a sequence of points x^(2), x^(3), ..., which is intended to converge to a local minimum of F(x). The sequence of points is given by
x^(k+1) = x^(k) + alpha^(k) p^(k)
where the vector p^(k) is a direction of search, and alpha^(k) is chosen such that F(x^(k) + alpha^(k) p^(k)) is approximately a minimum with respect to alpha^(k).
The vector p^(k) used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then p^(k) is the Gauss–Newton direction; otherwise the second derivatives of the f_i(x) are taken into account.
The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton's method.
References
Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem SIAM J. Numer. Anal. 15 977–992
Parameters
Compulsory Input Parameters
- 1: m – int64int32nag_int scalar
The number of residuals, m, and the number of variables, n.
Constraint: m ≥ n.
- 2: lsqfun – function handle or string containing name of m-file
lsqfun must calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. (However, if you do not wish to calculate the residuals or first derivatives at a particular x, there is the option of setting an argument to cause nag_opt_lsq_uncon_mod_deriv2_comp (e04he) to terminate immediately.)
[iflag, fvec, fjac, user] = lsqfun(iflag, m, n, xc, ldfjac, user)
Input Parameters
- 1: iflag – int64int32nag_int scalar
To lsqfun, iflag will be set to a non-negative number.
- 2: m – int64int32nag_int scalar
m, the number of residuals.
- 3: n – int64int32nag_int scalar
n, the number of variables.
- 4: xc – double array
The point x at which the values of the f_i and the ∂f_i/∂x_j are required.
- 5: ldfjac – int64int32nag_int scalar
The first dimension of the array fjac.
- 6: user – Any MATLAB object
lsqfun is called from nag_opt_lsq_uncon_mod_deriv2_comp (e04he) with the object supplied to nag_opt_lsq_uncon_mod_deriv2_comp (e04he).
Output Parameters
- 1: iflag – int64int32nag_int scalar
If it is not possible to evaluate the f_i(x) or their first derivatives at the point given in xc (or if you wish to stop the calculations for any other reason), you should reset iflag to some negative number and return control to nag_opt_lsq_uncon_mod_deriv2_comp (e04he). nag_opt_lsq_uncon_mod_deriv2_comp (e04he) will then terminate immediately, with ifail set to your setting of iflag.
- 2: fvec – double array
Unless iflag is reset to a negative number, fvec(i) must contain the value of f_i at the point x, for i = 1, 2, ..., m.
- 3: fjac – double array
Unless iflag is reset to a negative number, fjac(i,j) must contain the value of ∂f_i/∂x_j at the point x, for i = 1, 2, ..., m and j = 1, 2, ..., n.
- 4: user – Any MATLAB object
Note: lsqfun should be tested separately before being used in conjunction with
nag_opt_lsq_uncon_mod_deriv2_comp (e04he).
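For orientation, here is a minimal sketch of an lsqfun for a hypothetical two-parameter model f_i(x) = x_1 exp(x_2 t_i) − y_i. The model, and the assumption that the data are passed through user as a cell array {t, y}, are illustrative only and are not part of the e04he interface; the example program at the end of this document shows a complete, working lsqfun.
function [iflag, fvec, fjac, user] = lsqfun(iflag, m, n, xc, ldfjac, user)
% Hypothetical residuals f_i(x) = xc(1)*exp(xc(2)*t_i) - y_i, with the data
% (t_i, y_i) assumed to be supplied through the user object as {t, y}.
t = user{1};  y = user{2};
fvec = zeros(m, 1);
fjac = zeros(ldfjac, n);
for i = 1:double(m)
e = exp(xc(2)*t(i));
fvec(i)   = xc(1)*e - y(i);     % residual f_i(x)
fjac(i,1) = e;                  % df_i/dx_1
fjac(i,2) = xc(1)*t(i)*e;       % df_i/dx_2
end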
- 3: lsqhes – function handle or string containing name of m-file
lsqhes must calculate the elements of the symmetric matrix B(x) = sum(i = 1 to m) f_i(x) G_i(x) at any point x, where G_i(x) is the Hessian matrix of f_i(x). (As with lsqfun, there is the option of causing nag_opt_lsq_uncon_mod_deriv2_comp (e04he) to terminate immediately.)
[iflag, b, user] = lsqhes(iflag, m, n, fvec, xc, lb, user)
Input Parameters
- 1: iflag – int64int32nag_int scalar
Is set to a non-negative number.
- 2: m – int64int32nag_int scalar
m, the number of residuals.
- 3: n – int64int32nag_int scalar
n, the number of variables.
- 4: fvec – double array
fvec(i) contains the value of the residual f_i at the point x, for i = 1, 2, ..., m, so that the values of the f_i can be used in the calculation of the elements of b.
- 5: xc – double array
The point x at which the elements of b are to be evaluated.
- 6: lb – int64int32nag_int scalar
The length of the array b.
- 7: user – Any MATLAB object
lsqhes is called from nag_opt_lsq_uncon_mod_deriv2_comp (e04he) with the object supplied to nag_opt_lsq_uncon_mod_deriv2_comp (e04he).
Output Parameters
- 1: iflag – int64int32nag_int scalar
If lsqhes resets iflag to some negative number, nag_opt_lsq_uncon_mod_deriv2_comp (e04he) will terminate immediately, with ifail set to your setting of iflag.
- 2: b – double array
Unless iflag is reset to a negative number, b must contain the lower triangle of the matrix B(x), evaluated at the point x, stored by rows. (The upper triangle is not required because the matrix is symmetric.) More precisely, b(j(j−1)/2 + k) must contain sum(i = 1 to m) f_i ∂²f_i/∂x_j∂x_k evaluated at the point x, for j = 1, 2, ..., n and k = 1, 2, ..., j.
- 3: user – Any MATLAB object
Note: lsqhes should be tested separately before being used in conjunction with
nag_opt_lsq_uncon_mod_deriv2_comp (e04he).
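Continuing the hypothetical model used in the lsqfun sketch above, a matching lsqhes might look as follows; it is a sketch only, illustrating how the lower triangle of B(x) is packed into b by rows.
function [iflag, b, user] = lsqhes(iflag, m, n, fvec, xc, lb, user)
% Same hypothetical model as the lsqfun sketch: f_i(x) = xc(1)*exp(xc(2)*t_i) - y_i.
% b returns the lower triangle of B(x) = sum_i f_i(x)*G_i(x) stored by rows,
% i.e. b(1) = B(1,1), b(2) = B(2,1), b(3) = B(2,2) for n = 2.
t = user{1};
b = zeros(lb, 1);
for i = 1:double(m)
e = exp(xc(2)*t(i));
% G_i elements: d2f_i/dx1^2 = 0, d2f_i/dx1dx2 = t_i*e, d2f_i/dx2^2 = xc(1)*t_i^2*e
b(2) = b(2) + fvec(i)*t(i)*e;
b(3) = b(3) + fvec(i)*xc(1)*t(i)^2*e;
end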
- 4: lsqmon – function handle or string containing name of m-file
If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its arguments.
If iprint < 0, the string nag_opt_lsq_dummy_lsqmon (e04fdz) can be used as lsqmon.
[user] = lsqmon(m, n, xc, fvec, fjac, ldfjac, s, igrade, niter, nf, user)
Input Parameters
- 1: m – int64int32nag_int scalar
m, the number of residuals.
- 2: n – int64int32nag_int scalar
n, the number of variables.
- 3: xc – double array
The coordinates of the current point x.
- 4: fvec – double array
The values of the residuals f_i at the current point x.
- 5: fjac – double array
fjac(i,j) contains the value of ∂f_i/∂x_j at the current point x, for i = 1, 2, ..., m and j = 1, 2, ..., n.
- 6: ldfjac – int64int32nag_int scalar
The first dimension of the array fjac.
- 7: s – double array
The singular values of the current Jacobian matrix. Thus s may be useful as information about the structure of your problem. (If iprint > 0, lsqmon is called at the initial point before the singular values have been calculated, so the elements of s are set to zero for the first call of lsqmon.)
- 8: igrade – int64int32nag_int scalar
nag_opt_lsq_uncon_mod_deriv2_comp (e04he) estimates the dimension of the subspace for which the Jacobian matrix can be used as a valid approximation to the curvature (see Gill and Murray (1978)). This estimate is called the grade of the Jacobian matrix, and igrade gives its current value.
- 9: niter – int64int32nag_int scalar
The number of iterations which have been performed in nag_opt_lsq_uncon_mod_deriv2_comp (e04he).
- 10: nf – int64int32nag_int scalar
The number of times that lsqfun has been called so far. Thus nf gives the number of evaluations of the residuals and the Jacobian matrix.
- 11: user – Any MATLAB object
lsqmon is called from nag_opt_lsq_uncon_mod_deriv2_comp (e04he) with the object supplied to nag_opt_lsq_uncon_mod_deriv2_comp (e04he).
Output Parameters
- 1: user – Any MATLAB object
Note: you should normally print the sum of squares of residuals, so as to be able to examine the sequence of values of F(x) mentioned in Accuracy. It is usually helpful to also print xc, the gradient of the sum of squares, niter and nf.
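A minimal monitoring function along the lines of the note above might simply print the sum of squares, the gradient norm and the counts. This is an illustrative sketch only (it is not the dummy routine e04fdz); the gradient of F(x) is formed as 2*J'*f from the arrays supplied to lsqmon.
function [user] = lsqmon(m, n, xc, fvec, fjac, ldfjac, s, igrade, niter, nf, user)
% Print F(x), the norm of its gradient 2*J'*f, and the iteration/evaluation counts.
fsumsq = fvec'*fvec;
g = 2*(fjac(1:double(m),1:double(n))'*fvec);
fprintf('niter = %3d  nf = %3d  F(x) = %12.5e  ||g|| = %10.3e\n', ...
niter, nf, fsumsq, norm(g));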
- 5: xtol – double scalar
The accuracy in x to which the solution is required.
If x_true is the true value of x at the minimum, then x_sol, the estimated position before a normal exit, is such that ||x_sol − x_true|| < xtol × (1.0 + ||x_true||), where ||y|| = sqrt(sum(j = 1 to n) y_j^2). For example, if the elements of x_sol are not much larger than 1.0 in modulus and if xtol = 1.0e−5, then x_sol is usually accurate to about five decimal places. (For further details see Accuracy.)
If F(x) and the variables are scaled roughly as described in Further Comments and ε is the machine precision, then a setting of order xtol = sqrt(ε) will usually be appropriate. If xtol is set to 0.0 or some positive value less than 10ε, nag_opt_lsq_uncon_mod_deriv2_comp (e04he) will use 10ε instead of xtol, since 10ε is probably the smallest reasonable setting.
Constraint: xtol ≥ 0.0.
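A common choice, and the one used in the example program at the end of this document, is to take xtol to be the square root of the machine precision returned by x02aj:
xtol = sqrt(x02aj());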
- 6: x – double array
x(j) must be set to a guess at the jth component of the position of the minimum, for j = 1, 2, ..., n.
Optional Input Parameters
- 1: n – int64int32nag_int scalar
Default: for n, the dimension of the array x.
The number of residuals, m, and the number of variables, n.
Constraint: 1 ≤ n ≤ m.
- 2: iprint – int64int32nag_int scalar
Default: 1
Specifies the frequency with which lsqmon is to be called.
- If iprint > 0, lsqmon is called once every iprint iterations and just before exit from nag_opt_lsq_uncon_mod_deriv2_comp (e04he).
- If iprint = 0, lsqmon is just called at the final point.
- If iprint < 0, lsqmon is not called at all.
iprint should normally be set to a small positive number.
- 3: maxcal – int64int32nag_int scalar
Default: 50 × n
This argument is present so as to enable you to limit the number of times that lsqfun is called by nag_opt_lsq_uncon_mod_deriv2_comp (e04he). There will be an error exit (see Error Indicators and Warnings) after maxcal calls of lsqfun.
Constraint: maxcal ≥ 1.
- 4: eta – double scalar
Suggested value: eta = 0.5 (eta = 0.0 if n = 1).
Default:
- if n = 1, eta = 0.0;
- otherwise eta = 0.5.
Every iteration of nag_opt_lsq_uncon_mod_deriv2_comp (e04he) involves a linear minimization (i.e., minimization of F(x^(k) + alpha^(k) p^(k)) with respect to alpha^(k)). eta must lie in the range 0.0 ≤ eta < 1.0, and specifies how accurately these linear minimizations are to be performed. The minimum with respect to alpha^(k) will be located more accurately for small values of eta (say, 0.01) than for large values (say, 0.9).
Although accurate linear minimizations will generally reduce the number of iterations performed by nag_opt_lsq_uncon_mod_deriv2_comp (e04he), they will increase the number of calls of lsqfun made each iteration. On balance it is usually more efficient to perform a low accuracy minimization.
Constraint: 0.0 ≤ eta < 1.0.
- 5: stepmx – double scalar
Default: 100000.0
An estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency a slight overestimate is preferable.) nag_opt_lsq_uncon_mod_deriv2_comp (e04he) will ensure that, for each iteration k,
||x^(k) − x^(1)|| ≤ k × stepmx,
where k is the iteration number. Thus, if the problem has more than one solution, nag_opt_lsq_uncon_mod_deriv2_comp (e04he) is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of x^(k) entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of F(x). However, an underestimate of stepmx can lead to inefficiency.
Constraint: stepmx ≥ xtol.
- 6: user – Any MATLAB object
user is not used by nag_opt_lsq_uncon_mod_deriv2_comp (e04he), but is passed to lsqfun, lsqhes and lsqmon. Note that for large objects it may be more efficient to use a global variable which is accessible from the m-files than to use user.
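As an illustrative alternative to the global variables used in the example program below, the problem data could be packed into user (here assumed to be a cell array {t, y}) and unpacked inside lsqfun, lsqhes and lsqmon. The sketch assumes m, xtol, x, t and y have been set up as in the example program.
user = {t, y};   % data set up by the calling program
[x, fsumsq, fvec, fjac, s, v, niter, nf, user, ifail] = ...
e04he(m, @lsqfun, @lsqhes, @lsqmon, xtol, x, 'user', user);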
Output Parameters
- 1: x – double array
The final point x^(k). Thus, if ifail = 0 on exit, x(j) is the jth component of the estimated position of the minimum.
- 2: fsumsq – double scalar
The value of F(x), the sum of squares of the residuals f_i(x), at the final point given in x.
- 3: fvec – double array
The value of the residual f_i(x) at the final point given in x, for i = 1, 2, ..., m.
- 4: fjac – double array
The value of the first derivative ∂f_i/∂x_j evaluated at the final point given in x, for i = 1, 2, ..., m and j = 1, 2, ..., n.
- 5: s – double array
The singular values of the Jacobian matrix at the final point. Thus s may be useful as information about the structure of your problem.
- 6: v – double array
The matrix V associated with the singular value decomposition J = U S V^T of the Jacobian matrix at the final point, stored by columns. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of J^T J.
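Because s and v define the singular value decomposition of the final Jacobian, they can be used, for example, to reconstruct J^T J or to gauge its conditioning. The lines below are an illustrative sketch, not part of the e04he interface.
JtJ   = v*diag(s.^2)*v';        % J'J = V*diag(s)^2*V' at the final point
kappa = (max(s)/min(s))^2;      % rough condition number estimate of J'J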
- 7: niter – int64int32nag_int scalar
The number of iterations which have been performed in nag_opt_lsq_uncon_mod_deriv2_comp (e04he).
- 8: nf – int64int32nag_int scalar
The number of times that the residuals and Jacobian matrix have been evaluated (i.e., the number of calls of lsqfun).
- 9: user – Any MATLAB object
- 10: ifail – int64int32nag_int scalar
ifail = 0 unless the function detects an error (see Error Indicators and Warnings).
Error Indicators and Warnings
Note: nag_opt_lsq_uncon_mod_deriv2_comp (e04he) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and
do not generate an error of type NAG:error_n. See nag_issue_warnings.
- W ifail < 0
A negative value of ifail indicates an exit from nag_opt_lsq_uncon_mod_deriv2_comp (e04he) because you have set iflag negative in lsqfun or lsqhes. The value of ifail will be the same as your setting of iflag.
- ifail = 1
On entry, one or more of the constraints given in Arguments was violated, for example:
n < 1,
or m < n,
or maxcal < 1,
or eta < 0.0,
or eta ≥ 1.0,
or xtol < 0.0,
or stepmx < xtol.
When this exit occurs, no values will have been assigned to fsumsq, or to the elements of fvec, fjac, s or v.
- ifail = 2
There have been maxcal calls of lsqfun. If steady reductions in the sum of squares, F(x), were monitored up to the point where this exit occurred, then the exit probably occurred simply because maxcal was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that F(x) has no minimum.
- W ifail = 3
The conditions for a minimum have not all been satisfied, but a lower point could not be found. This could be because xtol has been set so small that rounding errors in the evaluation of the residuals and derivatives make attainment of the convergence conditions impossible.
- ifail = 4
The method for computing the singular value decomposition of the Jacobian matrix has failed to converge in a reasonable number of sub-iterations. It may be worth applying nag_opt_lsq_uncon_mod_deriv2_comp (e04he) again starting with an initial approximation which is not too close to the point at which the failure occurred.
- ifail = -99
An unexpected error has been triggered by this routine. Please contact NAG.
- ifail = -399
Your licence key may have expired or may not have been installed correctly.
- ifail = -999
Dynamic memory allocation failed.
The values ifail = 2, 3 and 4 may also be caused by mistakes in lsqfun or lsqhes, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
Accuracy
A successful exit (ifail = 0) is made from nag_opt_lsq_uncon_mod_deriv2_comp (e04he) when the matrix of second derivatives of F(x) is positive definite and when the convergence criteria (B1, B2 and B3), or B4, or B5 hold; these criteria are expressed in terms of xtol and the machine precision ε, which are as defined in Arguments, and of F^(k) and g^(k), the values of F(x) and its vector of first derivatives at x^(k).
If ifail = 0, then the vector in x on exit, x_sol, is almost certainly an estimate of x_true, the position of the minimum to the accuracy specified by xtol.
If ifail = 3, then x_sol may still be a good estimate of x_true, but to verify this you should make the following checks. If
(a) the sequence F(x^(k)) converges to F(x_sol) at a superlinear or a fast linear rate, and
(b) g(x_sol)^T g(x_sol) < 10ε, where T denotes transpose,
then it is almost certain that x_sol is a close approximation to the minimum.
When (b) is true, then usually F(x_sol) is a close approximation to F(x_true). The values of F(x^(k)) can be calculated in lsqmon, and the vector g(x_sol) can be calculated from the contents of fvec and fjac on exit from nag_opt_lsq_uncon_mod_deriv2_comp (e04he).
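For example, the quantity used in check (b) can be formed from the returned arrays along the following lines (a sketch; g is the gradient of F(x) at the final point):
g   = 2*(fjac'*fvec);   % gradient of F(x) from the returned Jacobian and residuals
gtg = g'*g;             % quantity compared against a small multiple of machine precision in (b)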
Further suggestions about confirmation of a computed solution are given in the
E04 Chapter Introduction.
Further Comments
The number of iterations required depends on the number of variables, the number of residuals, the behaviour of F(x), the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed per iteration of nag_opt_lsq_uncon_mod_deriv2_comp (e04he) varies, but grows with both m and n. In addition, each iteration makes at least one call of lsqfun and some iterations may call lsqhes. So, unless the residuals and their derivatives can be evaluated very quickly, the run time will be dominated by the time spent in lsqfun (and, to a lesser extent, in lsqhes).
Ideally, the problem should be scaled so that, at the solution, F(x) and the corresponding values of the x_j are each in the range (−1, +1), and so that at points one unit away from the solution, F(x) differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix of F(x) at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_lsq_uncon_mod_deriv2_comp (e04he) will take less computer time.
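A minimal sketch of this advice, under the assumption that rough magnitudes xs of the solution components are known (the values below are hypothetical): work with scaled variables z = x./xs, which are of order one near the minimum, and map back to x before evaluating the residuals.
xs = [0.1; 1.0; 2.0];            % assumed rough magnitudes of the solution (hypothetical)
scale_from_x = @(x) x ./ xs;     % map an unscaled guess to scaled variables z
scale_to_x   = @(z) xs .* z;     % map scaled variables back to x inside lsqfun
z0 = scale_from_x([0.5; 1; 1.5]);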
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to
nag_opt_lsq_uncon_covariance (e04yc), using information returned in the arrays
s and
v. See
nag_opt_lsq_uncon_covariance (e04yc) for further details.
Example
This example finds least squares estimates of x_1, x_2 and x_3 in the model
y = x_1 + t_1/(x_2 t_2 + x_3 t_3)
using the 15 sets of data given in the example program below.
lsqfun and lsqhes may first be checked by calling nag_opt_lsq_check_deriv (e04ya) and nag_opt_lsq_check_hessian (e04yb); these checks are not shown in the program below. The program uses (0.5, 1.0, 1.5) as the initial guess at the position of the minimum.
function e04he_example
fprintf('e04he example results\n\n');
global y t;
m = int64(15);
y = [ 0.14, 0.18, 0.22, 0.25, 0.29,...
0.32, 0.35, 0.39, 0.37, 0.58,...
0.73, 0.96, 1.34, 2.10, 4.39];
t = [ 1.0 15.0 1.0;
2.0 14.0 2.0;
3.0 13.0 3.0;
4.0 12.0 4.0;
5.0 11.0 5.0;
6.0 10.0 6.0;
7.0 9.0 7.0;
8.0 8.0 8.0;
9.0 7.0 7.0;
10.0 6.0 6.0;
11.0 5.0 5.0;
12.0 4.0 4.0;
13.0 3.0 3.0;
14.0 2.0 2.0;
15.0 1.0 1.0];
n = 3;
x = [0.5; 1; 1.5];
xtol = sqrt(x02aj());
[x, fsumsq, fvec, fjac, s, v, niter, nf, user, ifail] = ...
e04he(...
m, @lsqfun, @lsqhes, @lsqmon, xtol, x);
fprintf('Best fit model parameters are:\n');
for i = 1:n
fprintf(' x_%d = %10.3f\n',i,x(i));
end
fprintf('\nResiduals for observed data:\n');
fprintf(' %8.4f %8.4f %8.4f %8.4f %8.4f\n',fvec);
fprintf('\nSum of squares of residuals = %7.4f\n',fsumsq);
fprintf('Number of iterations = %7d\n',niter);
fprintf('Number of function evaluations = %7d\n',nf);
function [iflag, fvecc, fjacc, user] = lsqfun(iflag, m, n, xc, ljc, user)
global y t;
fvecc = zeros(m, 1);
fjacc = zeros(ljc, n);
for i = 1:m
denom = xc(2)*t(i,2) + xc(3)*t(i,3);
fvecc(i) = xc(1) + t(i,1)/denom - y(i);
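% the first derivatives are evaluated only when iflag is nonzero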
if (iflag ~= 0)
fjacc(i,1) = 1;
dummy = -1/(denom*denom);
fjacc(i,2) = t(i,1)*t(i,2)*dummy;
fjacc(i,3) = t(i,1)*t(i,3)*dummy;
end
end
function [iflag, b, user] = lsqhes(iflag, m, n, fvecc, xc, lb, user)
global y t;
b = zeros(lb, 1);
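% b holds the lower triangle of B(x) = sum_i f_i(x)*G_i(x) stored by rows:
% b(1) = B(1,1), b(2) = B(2,1), b(3) = B(2,2), b(4) = B(3,1), b(5) = B(3,2), b(6) = B(3,3).
% Second derivatives involving x(1) vanish for this model, so b(1), b(2) and b(4) remain zero.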
sum22 = 0;
sum32 = 0;
sum33 = 0;
for i = 1:double(m)
dummy = 2*t(i,1)/(xc(2)*t(i,2)+xc(3)*t(i,3))^3;
sum22 = sum22 + fvecc(i)*dummy*t(i,2)^2;
sum32 = sum32 + fvecc(i)*dummy*t(i,2)*t(i,3);
sum33 = sum33 + fvecc(i)*dummy*t(i,3)^2;
end
b(3) = sum22;
b(5) = sum32;
b(6) = sum33;
function [user] = lsqmon(m, n, xc, fvecc, fjacc, ljc, ...
s, igrade, niter, nf, user)
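% dummy monitoring function: no intermediate output is produced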
e04he example results
Best fit model parameters are:
x_1 = 0.082
x_2 = 1.133
x_3 = 2.344
Residuals for observed data:
-0.0059 -0.0003 0.0003 0.0065 -0.0008
-0.0013 -0.0045 -0.0200 0.0822 -0.0182
-0.0148 -0.0147 -0.0112 -0.0042 0.0068
Sum of squares of residuals = 0.0082
Number of iterations = 5
Number of function evaluations = 6
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015