naginterfaces.library.opt.lsq_uncon_mod_func_comp
- naginterfaces.library.opt.lsq_uncon_mod_func_comp(m, lsqfun, x, lsqmon=None, iprint=1, maxcal=None, eta=None, xtol=0.0, stepmx=100000.0, data=None)
lsq_uncon_mod_func_comp is a comprehensive algorithm for finding an unconstrained minimum of a sum of squares of m nonlinear functions in n variables (m ≥ n). No derivatives are required. The function is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
For full information please refer to the NAG Library document for e04fc
https://support.nag.com/numeric/nl/nagdoc_30.2/flhtml/e04/e04fcf.html
- Parameters
- m : int
The number of residuals, m, and the number of variables, n.
- lsqfun : callable (iflag, fvec) = lsqfun(iflag, m, xc, data=None)
lsqfun must calculate the vector of values f_i(x) at any point x. (However, if you do not wish to calculate the residuals at a particular x, there is the option of setting an argument to cause lsq_uncon_mod_func_comp to terminate immediately.)
- Parameters
- iflag : int
Has a non-negative value.
- m : int
m, the number of residuals.
- xc : float, ndarray, shape (n)
The point x at which the values of the f_i are required.
- data : arbitrary, optional, modifiable in place
User-communication data for callback functions.
- Returns
- iflag : int
If iflag is reset to some negative number, lsq_uncon_mod_func_comp will terminate immediately, with errno set to your setting of iflag.
- fvec : float, array-like, shape (m)
Unless iflag is reset to a negative number, fvec[i-1] must contain the value of f_i at the point x, for i = 1, 2, …, m.
- x : float, array-like, shape (n)
x[j-1] must be set to a guess at the jth component of the position of the minimum, for j = 1, 2, …, n.
- lsqmon : None or callable lsqmon(xc, fvec, fjac, s, igrade, niter, nf, data=None), optional
Note: if this argument is None then a NAG-supplied facility will be used.
If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its arguments.
- Parameters
- xc : float, ndarray, shape (n)
The coordinates of the current point x.
- fvec : float, ndarray, shape (m)
The values of the residuals f_i at the current point x.
- fjac : float, ndarray, shape (m, n)
fjac[i-1,j-1] contains the value of ∂f_i/∂x_j at the current point x, for i = 1, 2, …, m, for j = 1, 2, …, n.
- s : float, ndarray, shape (n)
The singular values of the current approximation to the Jacobian matrix. Thus s may be useful as information about the structure of your problem.
- igrade : int
lsq_uncon_mod_func_comp estimates the dimension of the subspace for which the Jacobian matrix can be used as a valid approximation to the curvature (see Gill and Murray (1978)). This estimate is called the grade of the Jacobian matrix, and igrade gives its current value.
- niter : int
The number of iterations which have been performed in lsq_uncon_mod_func_comp.
- nf : int
The number of times that lsqfun has been called so far. (However, for intermediate calls of lsqmon, nf is calculated on the assumption that the latest linear search has been successful. If this is not the case, the n evaluations allowed for approximating the Jacobian at the new point will not in fact have been made. nf will be accurate at the final call of lsqmon.)
- data : arbitrary, optional, modifiable in place
User-communication data for callback functions.
- iprint : int, optional
The frequency with which lsqmon is to be called.
If iprint > 0, lsqmon is called once every iprint iterations and just before exit from lsq_uncon_mod_func_comp.
If iprint = 0, lsqmon is just called at the final point.
If iprint < 0, lsqmon is not called at all.
iprint should normally be set to a small positive number.
- maxcal : None or int, optional
Note: if this argument is None then a default value will be used, determined as follows: 400 × n.
The limit you set on the number of times that lsqfun may be called by lsq_uncon_mod_func_comp. There will be an error exit (see Exceptions) after maxcal calls of lsqfun.
- eta : None or float, optional
Note: if this argument is None then a default value will be used, determined as follows: if n = 1: 0.0; otherwise: 0.5.
Every iteration of lsq_uncon_mod_func_comp involves a linear minimization, i.e., minimization of F(x + α p) with respect to α. eta specifies how accurately the linear minimizations are to be performed. The minimum with respect to α will be located more accurately for small values of eta (say, 0.01) than for large values (say, 0.9). Although accurate linear minimizations will generally reduce the number of iterations performed by lsq_uncon_mod_func_comp, they will increase the number of calls of lsqfun made each iteration. On balance it is usually more efficient to perform a low accuracy minimization.
- xtol : float, optional
The accuracy in x to which the solution is required.
If x_true is the true value of x at the minimum, then x_sol, the estimated position before a normal exit, is such that
||x_sol − x_true|| < xtol × (1.0 + ||x_true||),
where ||y|| = sqrt(Σ_{j=1}^n y_j²).
For example, if the elements of x_sol are not much larger than 1.0 in modulus and if xtol = 1.0e−5, then x_sol is usually accurate to about five decimal places. (For further details see Accuracy.)
- stepmx : float, optional
An estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency, a slight overestimate is preferable.)
lsq_uncon_mod_func_comp will ensure that, for each iteration k,
Σ_{j=1}^n [x_j^(k) − x_j^(k−1)]² ≤ stepmx²,
where k is the iteration number. Thus, if the problem has more than one solution, lsq_uncon_mod_func_comp is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of points entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of F(x). However, an underestimate of stepmx can lead to inefficiency.
- data : arbitrary, optional
User-communication data for callback functions.
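As an illustration of the callback contracts above, here is a pure-Python sketch (not invoking the library) of an lsqfun/lsqmon pair for a hypothetical two-residual problem: the Rosenbrock function written as a sum of squares, f_1 = 10(x_2 − x_1²), f_2 = 1 − x_1. The residual choice and the `abort` flag in `data` are assumptions of this sketch, not part of the library's API:

```python
def lsqfun(iflag, m, xc, data=None):
    # Hypothetical residuals: the Rosenbrock function as a sum of
    # squares, f1 = 10*(x2 - x1**2), f2 = 1 - x1 (so m = 2).
    if data is not None and data.get('abort'):
        # Resetting iflag to a negative value asks the solver to
        # terminate immediately.
        return -1, []
    fvec = [10.0 * (xc[1] - xc[0] ** 2), 1.0 - xc[0]]
    return iflag, fvec

def lsqmon(xc, fvec, fjac, s, igrade, niter, nf, data=None):
    # Monitoring only: must not modify any of its arguments.
    fsumsq = sum(f * f for f in fvec)
    print('iter %d: F = %.3e, grade = %d, nf = %d'
          % (niter, fsumsq, igrade, nf))
```

The minimum of this sum of squares is at x = (1, 1), where both residuals vanish.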
- Returns
- x : float, ndarray, shape (n)
The final point x^(k). Thus, if no exception or warning is raised on exit, x[j-1] is the jth component of the estimated position of the minimum, for j = 1, 2, …, n.
- fsumsq : float
The value of F(x), the sum of squares of the residuals f_i(x), at the final point given in x.
- fvec : float, ndarray, shape (m)
The value of the residual f_i(x) at the final point given in x, for i = 1, 2, …, m.
- fjac : float, ndarray, shape (m, n)
fjac[i-1,j-1] contains the estimate of the first derivative ∂f_i/∂x_j at the final point given in x, for i = 1, 2, …, m, for j = 1, 2, …, n.
- s : float, ndarray, shape (n)
The singular values of the estimated Jacobian matrix at the final point. Thus s may be useful as information about the structure of your problem.
- v : float, ndarray, shape (n, n)
The matrix V associated with the singular value decomposition
J = U S V^T
of the estimated Jacobian matrix at the final point, stored by columns. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of J^T J.
- niter : int
The number of iterations which have been performed in lsq_uncon_mod_func_comp.
- nf : int
The number of times that the residuals have been evaluated (i.e., the number of calls of lsqfun).
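The returned quantities are linked: fsumsq is the sum of the squared entries of fvec, and at a minimum the gradient 2 J^T f formed from fjac and fvec is near zero. A pure-Python check of that optimality condition on an illustrative straight-line fit (data values invented for this sketch), whose least-squares solution is available in closed form:

```python
# Fit y ~ a + b*t by least squares: residuals f_i = a + b*t_i - y_i,
# so row i of the Jacobian is [1, t_i].  Illustrative data:
t = [0.0, 1.0, 2.0, 3.0]
y = [1.1, 1.9, 3.2, 3.8]

# Closed-form normal-equation solution for (a, b).
n_pts = len(t)
st = sum(t)
sy = sum(y)
stt = sum(ti * ti for ti in t)
sty = sum(ti * yi for ti, yi in zip(t, y))
det = n_pts * stt - st * st
a = (sy * stt - st * sty) / det
b = (n_pts * sty - st * sy) / det

fvec = [a + b * ti - yi for ti, yi in zip(t, y)]
fsumsq = sum(f * f for f in fvec)   # the quantity fsumsq reports

# Gradient components (J^T f): both vanish at the minimum.
g0 = sum(fvec)                      # against the column of ones
g1 = sum(ti * fi for ti, fi in zip(t, fvec))
```

Because the Jacobian here is constant, one normal-equation solve reaches the minimum exactly; for nonlinear residuals the routine must iterate.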
- Raises
- NagValueError
- (errno 1)
On entry, m = ⟨value⟩ and n = ⟨value⟩.
Constraint: m ≥ n.
- (errno 1)
On entry, n = ⟨value⟩.
Constraint: n ≥ 1.
- (errno 1)
On entry, maxcal = ⟨value⟩.
Constraint: maxcal ≥ 1.
- (errno 1)
On entry, eta = ⟨value⟩.
Constraint: 0.0 ≤ eta < 1.0.
- (errno 1)
On entry, stepmx = ⟨value⟩ and xtol = ⟨value⟩.
Constraint: stepmx ≥ xtol.
- (errno 1)
On entry, xtol = ⟨value⟩.
Constraint: xtol ≥ 0.0.
- Warns
- NagAlgorithmicWarning
- (errno 2)
User requested termination by setting iflag negative in lsqfun.
- (errno 3)
There have been maxcal calls to lsqfun.
- (errno 4)
The conditions for a minimum have not all been satisfied, but a lower point could not be found.
- (errno 5)
Failure in computing SVD of estimated Jacobian matrix.
- Notes
In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility.
lsq_uncon_mod_func_comp is essentially identical to the function LSQNDN in the NPL Algorithms Library. It is applicable to problems of the form:
minimize F(x) = Σ_{i=1}^m [f_i(x)]²
where x = (x_1, x_2, …, x_n)^T and m ≥ n. (The functions f_i(x) are often referred to as ‘residuals’.)
You must supply lsqfun to calculate the values of the f_i(x) at any point x.
From a starting point x^(1) supplied by you, the function generates a sequence of points x^(2), x^(3), …, which is intended to converge to a local minimum of F(x). The sequence of points is given by
x^(k+1) = x^(k) + α^(k) p^(k)
where the vector p^(k) is a direction of search, and α^(k) is chosen such that F(x^(k) + α^(k) p^(k)) is approximately a minimum with respect to α^(k).
The vector p^(k) used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then p^(k) is an approximation to the Gauss–Newton direction; otherwise additional function evaluations are made so as to enable p^(k) to be a more accurate approximation to the Newton direction.
The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton’s method.
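The scheme above — a search direction p from the Jacobian plus a step length α — can be sketched in pure Python. This Gauss–Newton iteration with a fixed α = 1 (no linear minimization, a simplification of what the routine actually does) uses residuals invented for the sketch and an analytic Jacobian, whereas the routine approximates the Jacobian by differences:

```python
import math

def residuals(x):
    # Hypothetical residuals with minimum F = 0 at x = (sqrt(2), sqrt(2)):
    # f1 = x1**2 - 2,  f2 = x1*x2 - 2.
    return [x[0] ** 2 - 2.0, x[0] * x[1] - 2.0]

def jacobian(x):
    # Analytic Jacobian J[i][j] = d f_i / d x_j.
    return [[2.0 * x[0], 0.0], [x[1], x[0]]]

def gauss_newton(x, tol=1e-12, maxiter=50):
    for _ in range(maxiter):
        f = residuals(x)
        J = jacobian(x)
        # Gauss-Newton direction: solve J p = -f (2x2 Cramer's rule).
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        p = [(-f[0] * J[1][1] + f[1] * J[0][1]) / det,
             (-f[1] * J[0][0] + f[0] * J[1][0]) / det]
        x = [x[0] + p[0], x[1] + p[1]]   # alpha = 1, no line search
        if math.hypot(p[0], p[1]) < tol:
            break
    return x
```

Starting from (1, 1) this converges to (√2, √2), where both residuals vanish; the routine adds the line search and the Gauss–Newton/Newton switching described above to make such iterations robust far from the solution.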
- References
Gill, P E and Murray, W (1978), Algorithms for the solution of the nonlinear least squares problem, SIAM J. Numer. Anal. 15, 977–992