e04gd is a comprehensive modified Gauss–Newton algorithm for finding an unconstrained minimum of a sum of squares of m nonlinear functions in n variables (m ≥ n). First derivatives are required.
The method is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
Syntax
C#
public static void e04gd( int m, int n, E04.E04GD_LSQFUN lsqfun, E04.E04GD_LSQMON lsqmon, int iprint, int maxcal, double eta, double xtol, double stepmx, double[] x, out double fsumsq, double[] fvec, double[,] fjac, double[] s, double[,] v, out int niter, out int nf, out int ifail )

Visual Basic
Public Shared Sub e04gd ( m As Integer, n As Integer, lsqfun As E04.E04GD_LSQFUN, lsqmon As E04.E04GD_LSQMON, iprint As Integer, maxcal As Integer, eta As Double, xtol As Double, stepmx As Double, x As Double(), <OutAttribute> ByRef fsumsq As Double, fvec As Double(), fjac As Double(,), s As Double(), v As Double(,), <OutAttribute> ByRef niter As Integer, <OutAttribute> ByRef nf As Integer, <OutAttribute> ByRef ifail As Integer )

Visual C++
public: static void e04gd( int m, int n, E04::E04GD_LSQFUN^ lsqfun, E04::E04GD_LSQMON^ lsqmon, int iprint, int maxcal, double eta, double xtol, double stepmx, array<double>^ x, [OutAttribute] double% fsumsq, array<double>^ fvec, array<double,2>^ fjac, array<double>^ s, array<double,2>^ v, [OutAttribute] int% niter, [OutAttribute] int% nf, [OutAttribute] int% ifail )

F#
static member e04gd : m : int * n : int * lsqfun : E04.E04GD_LSQFUN * lsqmon : E04.E04GD_LSQMON * iprint : int * maxcal : int * eta : float * xtol : float * stepmx : float * x : float[] * fsumsq : float byref * fvec : float[] * fjac : float[,] * s : float[] * v : float[,] * niter : int byref * nf : int byref * ifail : int byref -> unit
Parameters
- m
- Type: System.Int32. On entry: the number m of residuals, f_i(x), and the number n of variables, x_j. Constraint: 1 ≤ n ≤ m.
- n
- Type: System.Int32. On entry: the number m of residuals, f_i(x), and the number n of variables, x_j. Constraint: 1 ≤ n ≤ m.
- lsqfun
- Type: NagLibrary.E04.E04GD_LSQFUN. lsqfun must calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. (However, if you do not wish to calculate the residuals or first derivatives at a particular x, there is the option of setting a parameter to cause e04gd to terminate immediately.) A minimal sketch of the kind of residual and Jacobian arithmetic required is given after this parameter list.
A delegate of type E04GD_LSQFUN.
- lsqmon
- Type: NagLibrary.E04.E04GD_LSQMON. If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its parameters. If iprint < 0, the dummy method E04FDZ can be used as lsqmon.
A delegate of type E04GD_LSQMON.
Note: you should normally print the sum of squares of residuals, so as to be able to examine the sequence of values of F(x^(k)) mentioned in [Accuracy]. It is usually also helpful to print xc, the gradient of the sum of squares, niter and nf.
- iprint
- Type: System.Int32. On entry: the frequency with which lsqmon is to be called.
- iprint > 0: lsqmon is called once every iprint iterations and just before exit from e04gd.
- iprint = 0: lsqmon is just called at the final point.
- iprint < 0: lsqmon is not called at all.
iprint should normally be set to a small positive number. Suggested value: .
- maxcal
- Type: System.Int32. On entry: enables you to limit the number of times that lsqfun is called by e04gd. There will be an error exit (see [Error Indicators and Warnings]) after maxcal evaluations of the residuals (i.e., calls of lsqfun in which the residuals are evaluated). It should be borne in mind that, in addition to the calls of lsqfun which are limited directly by maxcal, there will be further calls of lsqfun to evaluate only first derivatives. Suggested value: . Constraint: .
- eta
- Type: System.Double. On entry: every iteration of e04gd involves a linear minimization, i.e., minimization of F(x^(k) + α^(k) p^(k)) with respect to α^(k). eta specifies how accurately these linear minimizations are to be performed. The minimum with respect to α^(k) will be located more accurately for small values of eta than for large values. Suggested value: . Constraint: .
- xtol
- Type: System.Double. On entry: the accuracy in x to which the solution is required. If x_true is the true value of x at the minimum, then x_sol, the estimated position before a normal exit, is such that ||x_sol − x_true|| < xtol × (1.0 + ||x_true||), where ||y|| = sqrt(y_1² + y_2² + ⋯ + y_n²). For example, if the elements of x_sol are not much larger than 1.0 in modulus and if xtol = 1.0e−5, then x_sol is usually accurate to about five decimal places. (For further details see [Accuracy].) If F(x) and the variables are scaled roughly as described in [Further Comments] and ε is the machine precision, then a setting of order xtol = sqrt(ε) will usually be appropriate. If xtol is set to 0.0, or to some positive value below the smallest reasonable setting (a small multiple of ε), e04gd will use that smallest reasonable setting instead of xtol. Constraint: .
- stepmx
- Type: System.Double. On entry: an estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency, a slight overestimate is preferable.) e04gd will ensure that, for each iteration k, ||x^(k) − x^(1)|| ≤ k × stepmx, where k is the iteration number. Thus, if the problem has more than one solution, e04gd is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of x^(k) entering a region where the problem is ill-behaved and can also help to avoid overflow in the evaluation of F(x). However, an underestimate of stepmx can lead to inefficiency. Suggested value: . Constraint: .
- x
- Type: System.Double[]. An array of size [n]. On entry: x[j − 1] must be set to a guess at the jth component of the position of the minimum, for j = 1, 2, …, n. On exit: the final point x^(k). Thus, if ifail = 0 on exit, x[j − 1] is the jth component of the estimated position of the minimum.
- fsumsq
- Type: System.Double%. On exit: the value of F(x), the sum of squares of the residuals f_i(x), at the final point given in x.
- fvec
- Type: System.Double[]. An array of size [m]. On exit: the value of the residual f_i(x) at the final point given in x, for i = 1, 2, …, m.
- fjac
- Type: System.Double[,]. An array of size [dim1, n]. Note: dim1 must satisfy the constraint dim1 ≥ m. On exit: the value of the first derivative ∂f_i/∂x_j evaluated at the final point given in x, for i = 1, 2, …, m and j = 1, 2, …, n.
- s
- Type: System.Double[]. An array of size [n]. On exit: the singular values of the Jacobian matrix at the final point. Thus s may be useful as information about the structure of your problem.
- v
- Type: System.Double[,]. An array of size [dim1, n]. Note: dim1 must satisfy the constraint dim1 ≥ n. On exit: the matrix V associated with the singular value decomposition J = U S Vᵀ of the Jacobian matrix at the final point, stored by columns. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of JᵀJ.
- niter
- Type: System.Int32%. On exit: the number of iterations which have been performed in e04gd.
- nf
- Type: System.Int32%. On exit: the number of times that the residuals have been evaluated (i.e., the number of calls of lsqfun in which the residuals were computed).
- ifail
- Type: System.Int32%. On exit: ifail = 0 unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).
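As an illustration of the kind of computation lsqfun must perform, the sketch below evaluates residuals f_i(x) and their first derivatives for a hypothetical two-parameter model y ≈ x_1 exp(x_2 t) fitted to m data points; the model, the data arrays t and y, and the class and method names are invented for this example only. In a real program this arithmetic would sit inside a delegate of type E04GD_LSQFUN (with monitoring, if required, in an E04GD_LSQMON delegate); the exact delegate signatures are documented with those types and are not reproduced here.

```csharp
using System;

// Illustrative sketch only: residual and Jacobian evaluation for the
// hypothetical model f_i(x) = x1 * exp(x2 * t_i) - y_i, with n = 2 variables
// and m = t.Length data points.
static class ResidualSketch
{
    // fvec[i] receives f_{i+1}(x), the residual of the (i+1)th data point.
    public static void Residuals(double[] x, double[] t, double[] y, double[] fvec)
    {
        for (int i = 0; i < fvec.Length; i++)
            fvec[i] = x[0] * Math.Exp(x[1] * t[i]) - y[i];
    }

    // fjac[i, j] receives the first derivative of f_{i+1} with respect to x_{j+1}.
    public static void Jacobian(double[] x, double[] t, double[,] fjac)
    {
        for (int i = 0; i < fjac.GetLength(0); i++)
        {
            double e = Math.Exp(x[1] * t[i]);
            fjac[i, 0] = e;               // d f_i / d x1
            fjac[i, 1] = x[0] * t[i] * e; // d f_i / d x2
        }
    }
}
```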
Description
e04gd is essentially identical to the method LSQFDN in the NPL Algorithms Library. It is applicable to problems of the form
F(x) = f_1(x)² + f_2(x)² + ⋯ + f_m(x)², minimized with respect to x = (x_1, x_2, …, x_n)ᵀ, where m ≥ n. (The functions f_i(x) are often referred to as ‘residuals’.)
You must supply a method to calculate the values of the f_i(x) and their first derivatives ∂f_i/∂x_j at any point x.
From a starting point x^(1) supplied by you, the method generates a sequence of points x^(2), x^(3), …, which is intended to converge to a local minimum of F(x). The sequence of points is given by
x^(k+1) = x^(k) + α^(k) p^(k),
where the vector p^(k) is a direction of search, and α^(k) is chosen such that F(x^(k) + α^(k) p^(k)) is approximately a minimum with respect to α^(k).
The vector p^(k) used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then p^(k) is the Gauss–Newton direction; otherwise finite difference estimates of the second derivatives of the f_i(x) are taken into account.
The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton's method.
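To make the iteration above concrete, here is a minimal sketch of one damped Gauss–Newton step: the direction p solves the linearized least-squares problem via the normal equations (JᵀJ) p = −Jᵀ f, and α is chosen by simple step halving until the sum of squares decreases. This is only an illustration of the search-direction and step-length ideas; it is not the algorithm used by e04gd, which works with a singular value decomposition of the Jacobian and brings in second-derivative information when the Gauss–Newton step is inadequate.

```csharp
using System;

// Illustrative sketch only: one damped Gauss-Newton step
//   x(k+1) = x(k) + alpha * p,  with (J^T J) p = -J^T f.
static class GaussNewtonStepSketch
{
    public static double[] Step(
        Func<double[], double[]> f,    // residual vector f(x), length m
        Func<double[], double[,]> jac, // Jacobian J(x), m by n
        double[] x)
    {
        double[] fv = f(x);
        double[,] J = jac(x);
        int m = fv.Length, n = x.Length;

        // Form the normal equations A p = b with A = J^T J and b = -J^T f.
        var A = new double[n, n];
        var b = new double[n];
        for (int j = 0; j < n; j++)
        {
            for (int k = 0; k < n; k++)
                for (int i = 0; i < m; i++)
                    A[j, k] += J[i, j] * J[i, k];
            for (int i = 0; i < m; i++)
                b[j] -= J[i, j] * fv[i];
        }
        double[] p = SolveGaussian(A, b);

        // Crude line search: halve alpha until the sum of squares decreases.
        double f0 = SumSq(fv);
        for (double alpha = 1.0; alpha > 1e-10; alpha *= 0.5)
        {
            var xNew = new double[n];
            for (int j = 0; j < n; j++) xNew[j] = x[j] + alpha * p[j];
            if (SumSq(f(xNew)) < f0) return xNew;
        }
        return x; // no improvement found
    }

    static double SumSq(double[] r)
    {
        double s = 0.0;
        foreach (double ri in r) s += ri * ri;
        return s;
    }

    // Gaussian elimination with partial pivoting (adequate for a sketch).
    static double[] SolveGaussian(double[,] A, double[] b)
    {
        int n = b.Length;
        for (int k = 0; k < n; k++)
        {
            int piv = k;
            for (int i = k + 1; i < n; i++)
                if (Math.Abs(A[i, k]) > Math.Abs(A[piv, k])) piv = i;
            for (int j = 0; j < n; j++) { double t = A[k, j]; A[k, j] = A[piv, j]; A[piv, j] = t; }
            { double tb = b[k]; b[k] = b[piv]; b[piv] = tb; }
            for (int i = k + 1; i < n; i++)
            {
                double r = A[i, k] / A[k, k];
                for (int j = k; j < n; j++) A[i, j] -= r * A[k, j];
                b[i] -= r * b[k];
            }
        }
        var sol = new double[n];
        for (int i = n - 1; i >= 0; i--)
        {
            double sum = b[i];
            for (int j = i + 1; j < n; j++) sum -= A[i, j] * sol[j];
            sol[i] = sum / A[i, i];
        }
        return sol;
    }
}
```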
References
Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem SIAM J. Numer. Anal. 15 977–992
Error Indicators and Warnings
Note: e04gd may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the method:
Some error messages may refer to parameters that are dropped from this interface (LDFJAC, LDV, IW, LIW, W, LW). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.
- On entry, one or more of the constraints on m, n, maxcal, eta, xtol or stepmx (see [Parameters]) has been violated.
- There have been maxcal evaluations of the residuals. If steady reductions in the sum of squares, F(x), were monitored up to the point where this exit occurred, then the exit probably occurred simply because maxcal was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that F(x) has no minimum.
- The conditions for a minimum have not all been satisfied, but a lower point could not be found. This could be because xtol has been set so small that rounding errors in the evaluation of the residuals and derivatives make attainment of the convergence conditions impossible.
- The method for computing the singular value decomposition of the Jacobian matrix has failed to converge in a reasonable number of sub-iterations. It may be worth applying e04gd again starting with an initial approximation which is not too close to the point at which the failure occurred.
- An error occurred, see message report.
- Invalid Parameters
- Invalid dimension for array
- Negative dimension for array
- Invalid Parameters
The exits corresponding to the maxcal, convergence-condition and singular value decomposition failures described above may also be caused by mistakes in lsqfun, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
Accuracy
A successful exit (ifail = 0) is made from e04gd when the matrix of approximate second derivatives of F(x) is positive definite, and when the criteria (B1, B2 and B3), or B4, or B5 hold. These criteria are expressed in terms of xtol and the machine precision ε, as defined for the parameter xtol, and of F^(k) and g^(k), the values of F(x) and its vector of estimated first derivatives at x^(k).
If ifail = 0, then the vector in x on exit, x_sol, is almost certainly an estimate of x_true, the position of the minimum, to the accuracy specified by xtol.
If e04gd terminates with the warning that the conditions for a minimum have not all been satisfied (see [Error Indicators and Warnings]), then x_sol may still be a good estimate of x_true, but to verify this you should make the following checks. If
(a) the sequence F(x^(k)) converges to F(x_sol) at a superlinear or a fast linear rate, and
(b) g(x_sol)ᵀ g(x_sol) is sufficiently small (where ᵀ denotes transpose),
then it is almost certain that x_sol is a close approximation to the minimum.
When (b) is true, then usually F(x_sol) is a close approximation to F(x_true). The values of F(x^(k)) can be calculated in lsqmon, and the vector g(x_sol) can be calculated from the contents of fvec and fjac on exit from e04gd.
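For check (b), the gradient of the sum of squares is g(x) = 2 Jᵀ f, where J is the Jacobian and f the vector of residuals, so a small helper along the following lines (illustrative only; not part of the library interface) can form g(x_sol) from the fvec and fjac arrays returned by e04gd.

```csharp
// Illustrative sketch: g_j = 2 * sum_i fjac[i, j] * fvec[i], i.e. g = 2 * J^T * f.
static class SolutionChecks
{
    public static double[] GradientOfSumOfSquares(double[] fvec, double[,] fjac)
    {
        int m = fvec.Length;
        int n = fjac.GetLength(1);
        var g = new double[n];
        for (int j = 0; j < n; j++)
            for (int i = 0; i < m; i++)
                g[j] += 2.0 * fjac[i, j] * fvec[i];
        return g;
    }
}
```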
Further suggestions about confirmation of a computed solution are given in the E04 class.
Parallelism and Performance
None.
Further Comments
The number of iterations required depends on the number of variables, the number of residuals, the behaviour of F(x), the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed per iteration of e04gd varies with m and n. In addition, each iteration makes at least one call of lsqfun. So, unless the residuals and their derivatives can be evaluated very quickly, the run time will be dominated by the time spent in lsqfun.
Ideally, the problem should be scaled so that, at the solution, the value of F(x) and the corresponding values of x_1, x_2, …, x_n are each in the range (−1, +1), and so that at points one unit away from the solution, F(x) differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix of F(x) at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that e04gd will take less computer time.
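One simple way to apply this advice is to minimize over scaled variables z, with x_j = d_j z_j for chosen scale factors d_j, so that the z_j are of order one near the solution. The residual values are unchanged, and by the chain rule the jth column of the Jacobian with respect to z is d_j times the column with respect to x. The sketch below is illustrative only; the array and method names are invented.

```csharp
// Illustrative sketch: convert a Jacobian with respect to x into a Jacobian
// with respect to scaled variables z, where x_j = scale[j] * z_j.
static class ScalingSketch
{
    public static void ScaleJacobian(double[] scale, double[,] fjacX, double[,] fjacZ)
    {
        int m = fjacX.GetLength(0), n = fjacX.GetLength(1);
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                fjacZ[i, j] = fjacX[i, j] * scale[j]; // d f_i / d z_j = scale_j * d f_i / d x_j
    }
}
```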
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to (E04YCF not in this release), using information returned in the arrays s and v. See (E04YCF not in this release) for further details.
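If that routine is not available, one commonly used large-sample approximation can be formed directly from the quantities returned by e04gd: C = σ² V S⁻² Vᵀ, where σ² = fsumsq / (m − n) and S is the diagonal matrix whose entries are the singular values in s. The sketch below assumes m > n and that no singular value is close to zero; it illustrates that standard formula only and is not a reproduction of the library routine's behaviour.

```csharp
// Illustrative sketch: C[i, j] = sigma^2 * sum_k v[i, k] * v[j, k] / s[k]^2,
// using fsumsq, s and v as returned by e04gd.
static class CovarianceSketch
{
    public static double[,] VarianceCovariance(int m, int n, double fsumsq, double[] s, double[,] v)
    {
        double sigma2 = fsumsq / (m - n); // residual variance estimate
        var c = new double[n, n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
            {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += v[i, k] * v[j, k] / (s[k] * s[k]);
                c[i, j] = sigma2 * sum;
            }
        return c;
    }
}
```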