NAG Toolbox: nag_opt_lsq_uncon_mod_deriv_easy (e04gz)
Purpose
nag_opt_lsq_uncon_mod_deriv_easy (e04gz) is an easy-to-use modified Gauss–Newton algorithm for finding an unconstrained minimum of a sum of squares of m nonlinear functions in n variables (m ≥ n). First derivatives are required.
It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
Syntax
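The calling sequence below is reconstructed from the Parameters section and the example program; in particular, passing the optional arguments as 'n'/'user' name-value pairs is an assumption about the usual Toolbox convention and should be checked against your Toolbox documentation.
[x, fsumsq, user, ifail] = e04gz(m, lsfun2, x, 'n', n, 'user', user)
[x, fsumsq, user, ifail] = nag_opt_lsq_uncon_mod_deriv_easy(m, lsfun2, x, 'n', n, 'user', user)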
Description
nag_opt_lsq_uncon_mod_deriv_easy (e04gz) is similar to the function LSFDN2 in the NPL Algorithms Library. It is applicable to problems of the form

   Minimize  F(x) = f_1(x)^2 + f_2(x)^2 + ... + f_m(x)^2

where x = (x_1, x_2, ..., x_n)^T and m ≥ n. (The functions f_i(x) are often referred to as ‘residuals’.)
You must supply a function to evaluate the residuals and their first derivatives at any point x.
Before attempting to minimize the sum of squares, the algorithm checks the function for consistency. Then, from a starting point supplied by you, a sequence of points is generated which is intended to converge to a local minimum of the sum of squares. These points are generated using estimates of the curvature of F(x).
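The curvature estimates are built from the Jacobian of the residuals. As a rough illustration only (a minimal sketch of one plain Gauss–Newton step on a hypothetical toy problem, not the modified, safeguarded algorithm used inside e04gz):
% Illustrative only: one plain Gauss-Newton step on a toy two-variable problem
% with residuals f_1(x) = x(1) - 1 and f_2(x) = 10*(x(2) - x(1)^2).
x = [0; 0];
f = [x(1) - 1;  10*(x(2) - x(1)^2)];   % residual vector at x
J = [1, 0;  -20*x(1), 10];             % Jacobian of the residuals at x
p = -(J'*J) \ (J'*f);                  % Gauss-Newton direction from the curvature estimate J'*J
x = x + p;                             % next iterate (no step-length control shown)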
References
Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem SIAM J. Numer. Anal. 15 977–992
Parameters
Compulsory Input Parameters
- 1: m – int64int32nag_int scalar
-
The number of residuals, m, and the number of variables, n.
Constraint: m ≥ n.
- 2: lsfun2 – function handle or string containing name of m-file
-
You must supply this function to calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. It should be tested separately before being used in conjunction with nag_opt_lsq_uncon_mod_deriv_easy (e04gz); a finite-difference check is sketched after this list of parameters.
[fvec, fjac, user] = lsfun2(m, n, xc, ldfjac, user)
Input Parameters
- 1: m – int64int32nag_int scalar
-
m, the number of residuals.
- 2: n – int64int32nag_int scalar
-
n, the number of variables.
- 3: xc(n) – double array
-
The point x at which the values of the f_i and the ∂f_i/∂x_j are required.
- 4: ldfjac – int64int32nag_int scalar
-
The first dimension of the array fjac.
- 5: user – Any MATLAB object
-
lsfun2 is called from nag_opt_lsq_uncon_mod_deriv_easy (e04gz) with the object supplied to nag_opt_lsq_uncon_mod_deriv_easy (e04gz).
Output Parameters
- 1: fvec(m) – double array
-
fvec(i) must be set to the value of f_i at the point x, for i = 1, 2, ..., m.
- 2: fjac(ldfjac,n) – double array
-
fjac(i,j) must be set to the value of ∂f_i/∂x_j at the point x, for i = 1, 2, ..., m and j = 1, 2, ..., n.
- 3: user – Any MATLAB object
- 3: x(n) – double array
-
x(j) must be set to a guess at the jth component of the position of the minimum, for j = 1, 2, ..., n. The function checks the first derivatives calculated by lsfun2 at the starting point, and so is more likely to detect any error in your functions if the initial x(j) are nonzero and mutually distinct.
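Because an incorrect Jacobian is a common cause of failure (see Error Indicators and Warnings), it is worth comparing the derivatives produced by lsfun2 with finite differences before calling nag_opt_lsq_uncon_mod_deriv_easy (e04gz). The sketch below is illustrative only: the problem size, the test point and the step length h are arbitrary, and it assumes a hypothetical lsfun2 with two variables and three residuals.
% Illustrative Jacobian check: compare lsfun2 against central differences
% at an arbitrary test point with nonzero, mutually distinct components.
m  = int64(3);  n = int64(2);
xc = [0.3; 0.7];
[fvec, fjac] = lsfun2(m, n, xc, m, []);          % analytic residuals and Jacobian
h  = 1e-6;
fd = zeros(double(m), double(n));
for j = 1:double(n)
  e = zeros(double(n), 1);  e(j) = h;
  fp = lsfun2(m, n, xc + e, m, []);              % f(x + h*e_j)
  fm = lsfun2(m, n, xc - e, m, []);              % f(x - h*e_j)
  fd(:, j) = (fp - fm) / (2*h);                  % central-difference column j
end
fprintf('max |analytic - finite difference| = %g\n', max(abs(fjac(:) - fd(:))));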
Optional Input Parameters
- 1: n – int64int32nag_int scalar
-
Default: the dimension of the array x.
The number of residuals, m, and the number of variables, n.
Constraint: 1 ≤ n ≤ m.
- 2: user – Any MATLAB object
-
user is not used by nag_opt_lsq_uncon_mod_deriv_easy (e04gz), but is passed to lsfun2. Note that for large objects it may be more efficient to use a global variable which is accessible from the m-files than to use user.
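As an illustration of this mechanism (a sketch only: the struct fields are arbitrary, and the name/value form of the optional user argument is an assumption about the Toolbox calling convention), the data used by lsfun2 could be passed without global variables like this:
% Hypothetical sketch: pass problem data to lsfun2 through the 'user' argument.
data = struct('y', y, 't', t);                    % observations packaged in a struct
[x, fsumsq, user, ifail] = e04gz(m, @lsfun2, x, 'user', data);

% ...and inside lsfun2, read the data back from user instead of using globals:
%   function [fvec, fjac, user] = lsfun2(m, n, xc, ldfjac, user)
%   y = user.y;  t = user.t;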
Output Parameters
- 1: x(n) – double array
-
The lowest point found during the calculations. Thus, if ifail = 0 on exit, x(j) is the jth component of the position of the minimum.
- 2: fsumsq – double scalar
-
The value of the sum of squares, F(x), corresponding to the final point stored in x.
- 3: user – Any MATLAB object
- 4: ifail – int64int32nag_int scalar
-
ifail = 0 unless the function detects an error (see Error Indicators and Warnings).
Error Indicators and Warnings
Note: nag_opt_lsq_uncon_mod_deriv_easy (e04gz) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and
do not generate an error of type NAG:error_n. See nag_issue_warnings.
- ifail = 1
-
On entry, n < 1,
or m < n,
or an internal workspace array is too small (the required length differs for the cases n > 1 and n = 1).
- ifail = 2
-
The maximum permitted number of calls of lsfun2 has been made, yet the algorithm does not seem to have converged. This may be due to an awkward function or to a poor starting point, so it is worth restarting nag_opt_lsq_uncon_mod_deriv_easy (e04gz) from the final point held in x.
- W ifail = 3
The final point does not satisfy the conditions for acceptance as a minimum, but no lower point could be found.
- ifail = 4
An auxiliary function has been unable to complete a singular value decomposition in a reasonable number of sub-iterations.
- W ifail = 5
- W ifail = 6
- W ifail = 7
- W ifail = 8
There is some doubt about whether the point x found by nag_opt_lsq_uncon_mod_deriv_easy (e04gz) is a minimum of F(x). The degree of confidence in the result decreases as ifail increases. Thus, when ifail = 5, it is probable that the final x gives a good estimate of the position of a minimum, but when ifail = 8 it is very unlikely that the function has found a minimum.
- ifail = 9
-
It is very likely that you have made an error in forming the derivatives ∂f_i/∂x_j in lsfun2.
- ifail = -99
-
An unexpected error has been triggered by this routine. Please contact NAG.
- ifail = -399
-
Your licence key may have expired or may not have been installed correctly.
- ifail = -999
-
Dynamic memory allocation failed.
If you are not satisfied with the result (e.g., because ifail lies between 3 and 8), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. Repeated failure may indicate some defect in the formulation of the problem.
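For example (a minimal sketch reusing m, lsfun2 and the starting point from the example program; the perturbation scheme and the retry limit are arbitrary choices, not NAG recommendations):
% Hypothetical retry loop: restart from a perturbed starting point while the
% exit code leaves the result in doubt.  Warning-class exits return with ifail
% set; error-class exits raise a MATLAB error instead (see nag_issue_warnings).
x0 = [0.5; 1; 1.5];
for attempt = 1:3
  [x, fsumsq, user, ifail] = e04gz(m, @lsfun2, x0);
  if ifail == 0
    break;                                   % accepted as a minimum
  end
  x0 = x0 .* (1 + 0.1*randn(size(x0)));      % move to a different starting point
end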
Accuracy
If the problem is reasonably well scaled and a successful exit is made, then, for a computer with a mantissa of t decimals, one would expect to get about t/2−1 decimals accuracy in the components of x, and between t−1 (if F(x) is of order 1 at the minimum) and 2t−2 (if F(x) is close to zero at the minimum) decimals accuracy in F(x).
Further Comments
The number of iterations required depends on the number of variables, the number of residuals and their behaviour, and the distance of the starting point from the solution. The number of multiplications performed per iteration of nag_opt_lsq_uncon_mod_deriv_easy (e04gz) varies, but for m much greater than n it is dominated by operations on the m × n Jacobian. In addition, each iteration makes at least one call of lsfun2. So, unless the residuals and their derivatives can be evaluated very quickly, the run time will be dominated by the time spent in lsfun2.
Ideally, the problem should be scaled so that the minimum value of the sum of squares is in the range (0, 1) and so that at points a unit distance away from the solution the sum of squares is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_lsq_uncon_mod_deriv_easy (e04gz) will take less computer time.
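One simple way to apply such scaling (a sketch only: the scale factors, the wrapper function and its name are hypothetical, and x0, m and lsfun2 are assumed to be defined as in the example program) is to minimize over scaled variables z = x ./ xscale:
% Hypothetical rescaling: work in z = x ./ xscale so that unit changes in z
% correspond to sensible changes in the original variables x.
xscale = [0.1; 1; 2];                            % rough guesses at the variable sizes
lsfun2s = @(m, n, zc, ldfjac, user) lsfun2_scaled(m, n, zc, ldfjac, user, xscale);
z0 = x0 ./ xscale;
[z, fsumsq, user, ifail] = e04gz(m, lsfun2s, z0);
x = z .* xscale;                                 % solution in the original variables

function [fvec, fjac, user] = lsfun2_scaled(m, n, zc, ldfjac, user, xscale)
% Evaluate the original residuals and apply the chain rule:
% d f_i / d z_j = (d f_i / d x_j) * xscale(j).
[fvec, fjac, user] = lsfun2(m, n, zc .* xscale, ldfjac, user);
fjac = fjac .* xscale(:)';                       % scale the Jacobian columns
end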
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to
nag_opt_lsq_uncon_covariance (e04yc), using information returned in segments of the workspace array
w. See
nag_opt_lsq_uncon_covariance (e04yc) for further details.
Example
This example finds least squares estimates of x_1, x_2 and x_3 in the model

   y = x_1 + t_1/(x_2*t_2 + x_3*t_3)

using the 15 sets of data (y, t_1, t_2, t_3) defined in the example program below. The program uses (0.5, 1.0, 1.5) as the initial guess at the position of the minimum.
function e04gz_example

fprintf('e04gz example results\n\n');

% Observed data: y holds the observations, t holds t_1, t_2 and t_3.
global y t;
m = int64(15);
y = [ 0.14, 0.18, 0.22, 0.25, 0.29,...
      0.32, 0.35, 0.39, 0.37, 0.58,...
      0.73, 0.96, 1.34, 2.10, 4.39];
t = [ 1.0, 15.0, 1.0;
      2.0, 14.0, 2.0;
      3.0, 13.0, 3.0;
      4.0, 12.0, 4.0;
      5.0, 11.0, 5.0;
      6.0, 10.0, 6.0;
      7.0,  9.0, 7.0;
      8.0,  8.0, 8.0;
      9.0,  7.0, 7.0;
     10.0,  6.0, 6.0;
     11.0,  5.0, 5.0;
     12.0,  4.0, 4.0;
     13.0,  3.0, 3.0;
     14.0,  2.0, 2.0;
     15.0,  1.0, 1.0];

% Starting point for the minimization.
n = 3;
x = [0.5; 1; 1.5];

% Minimize the sum of squares of the residuals defined in lsfun2.
[x, fsumsq, user, ifail] = e04gz(m, @lsfun2, x);

fprintf('Best fit model parameters are:\n');
for i = 1:n
  fprintf(' x_%d = %10.3f\n',i,x(i));
end
fprintf('\nSum of squares of residuals = %7.4f\n',fsumsq);


function [fvecc, fjacc, user] = lsfun2(m, n, xc, ljc, user)
% Residuals f_i(x) = x_1 + t_1/(x_2*t_2 + x_3*t_3) - y_i and their
% first derivatives with respect to x_1, x_2 and x_3.
global y t;

fvecc = zeros(m, 1);
fjacc = zeros(ljc, n);
for i = 1:m
  denom      = xc(2)*t(i,2) + xc(3)*t(i,3);
  fvecc(i)   = xc(1) + t(i,1)/denom - y(i);
  fjacc(i,1) = 1;
  dummy      = -1/(denom*denom);
  fjacc(i,2) = t(i,1)*t(i,2)*dummy;
  fjacc(i,3) = t(i,1)*t(i,3)*dummy;
end
e04gz example results
Best fit model parameters are:
x_1 = 0.082
x_2 = 1.133
x_3 = 2.344
Sum of squares of residuals = 0.0082
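As a quick sanity check of the fit (illustrative only; it reuses y, t and the returned x from the example program above), the fitted model values can be compared with the observations:
% Fitted values of y = x_1 + t_1/(x_2*t_2 + x_3*t_3) at the returned parameters.
yfit = x(1) + t(:,1) ./ (x(2)*t(:,2) + x(3)*t(:,3));
fprintf('Largest absolute residual = %7.4f\n', max(abs(yfit - y(:))));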
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2015