naginterfaces.library.opt.lsq_uncon_quasi_deriv_comp
- naginterfaces.library.opt.lsq_uncon_quasi_deriv_comp(m, selct, lsqfun, eta, xtol, x, lsqmon=None, iprint=1, maxcal=None, stepmx=100000.0, data=None, spiked_sorder='C')
lsq_uncon_quasi_deriv_comp is a comprehensive quasi-Newton algorithm for finding an unconstrained minimum of a sum of squares of m nonlinear functions in n variables (m ≥ n). First derivatives are required. The function is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
Deprecated since version 28.3.0.0: lsq_uncon_quasi_deriv_comp is deprecated. Please use handle_solve_bxnl() instead. See also the Replacement Calls document.
For full information please refer to the NAG Library document for e04gb:
https://www.nag.com/numeric/nl/nagdoc_29.2/flhtml/e04/e04gbf.html
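The basic calling pattern is sketched below. This is an illustrative sketch only: the linear residual problem, the data values and the tolerance settings are assumptions made for the example, and the result is assumed to be returned as an object whose attributes are the fields listed under Returns (naginterfaces functions normally return a named tuple). Because the routine is deprecated, the call may also emit a deprecation warning.

    import numpy as np
    from naginterfaces.library import opt

    # Linear residuals f(x) = A@x - b, so the Jacobian is simply A.
    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    b = np.array([2.9, 5.1, 7.0])

    def lsqfun(iflag, m, xc, data=None):
        # Return the residual vector (shape (m)) and the Jacobian (shape (m, n)).
        return iflag, A @ xc - b, A

    soln = opt.lsq_uncon_quasi_deriv_comp(
        m=3, selct=2, lsqfun=lsqfun, eta=0.5, xtol=1e-8, x=[0.0, 0.0],
    )
    print(soln.x)       # estimated position of the minimum
    print(soln.fsumsq)  # sum of squares of the residuals at soln.x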
- Parameters
- m : int
The number m of residuals, f_i(x), and the number n of variables, x_j.
- selct : int
selct enables you to specify whether the linear minimizations (i.e., minimizations of F(x^(k) + α^(k) p^(k)) with respect to α^(k)) are to be performed by a function which just requires the evaluation of the f_i(x) (selct = 1), or by a function which also requires the first derivatives of the f_i(x) (selct = 2).
It will often be possible to evaluate the first derivatives of the residuals in about the same amount of computer time that is required for the evaluation of the residuals themselves – if this is so, then lsq_uncon_quasi_deriv_comp should be called with selct = 2. However, if the evaluation of the derivatives takes considerably longer than the evaluation of the residuals, selct = 1 will usually be preferable.
If in doubt, use selct = 2 as it is slightly more robust.
- lsqfun : callable (iflag, fvec, fjac) = lsqfun(iflag, m, xc, data=None)
lsqfun must calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. (However, if you do not wish to calculate the residuals or first derivatives at a particular x, there is the option of setting an argument to cause lsq_uncon_quasi_deriv_comp to terminate immediately.)
- Parameters
- iflag : int
Will be set to 0, 1 or 2.
iflag = 0: indicates that only the residuals need to be evaluated.
iflag = 1: indicates that only the Jacobian matrix needs to be evaluated.
iflag = 2: indicates that both the residuals and the Jacobian matrix must be calculated.
If selct = 2, lsqfun will always be called with iflag set to 2.
- m : int
m, the number of residuals.
- xc : float, ndarray, shape (n)
The point x at which the values of the f_i and the ∂f_i/∂x_j are required.
- data : arbitrary, optional, modifiable in place
User-communication data for callback functions.
- Returns
- iflag : int
If it is not possible to evaluate the f_i(x) or their first derivatives at the point x given in xc (or if it is wished to stop the calculations for any other reason), you should reset iflag to some negative number and return control to lsq_uncon_quasi_deriv_comp. lsq_uncon_quasi_deriv_comp will then terminate immediately, with the error indicator set to your setting of iflag (see the sketch after the Warns section).
- fvec : float, array-like, shape (m)
Unless iflag = 1 on entry, or iflag is reset to a negative number, fvec[i-1] must contain the value of f_i at the point x, for i = 1, 2, ..., m.
- fjac : float, array-like, shape (m, n)
Unless iflag = 0 on entry, or iflag is reset to a negative number, fjac[i-1, j-1] must contain the value of ∂f_i/∂x_j at the point x, for i = 1, 2, ..., m, for j = 1, 2, ..., n.
- eta : float
Every iteration of lsq_uncon_quasi_deriv_comp involves a linear minimization (i.e., minimization of F(x^(k) + α^(k) p^(k)) with respect to α^(k)). eta specifies how accurately these linear minimizations are to be performed. The minimum with respect to α^(k) will be located more accurately for small values of eta (say, 0.01) than for large values (say, 0.9).
Although accurate linear minimizations will generally reduce the number of iterations performed by lsq_uncon_quasi_deriv_comp, they will increase the number of calls of lsqfun made every iteration. On balance it is usually more efficient to perform a low accuracy minimization.
Suggested value: the recommended setting depends on n and on selct; see the e04gb document linked above for the specific values.
- xtol : float
The accuracy in x to which the solution is required.
If x_true is the true value of x at the minimum, then x_sol, the estimated position before a normal exit, is such that
||x_sol − x_true|| < xtol × (1.0 + ||x_true||),
where ||y|| denotes the Euclidean norm sqrt(sum_{j=1}^{n} y_j^2).
For example, if the elements of x_sol are not much larger than 1.0 in modulus and if xtol = 1.0e−5, then x_sol is usually accurate to about five decimal places. (For further details see Accuracy.)
If F(x) and the variables are scaled roughly as described in Further Comments and ε is the machine precision, then a setting of order xtol = sqrt(ε) will usually be appropriate.
If xtol is set to 0.0 or some positive value less than 10ε, lsq_uncon_quasi_deriv_comp will use 10ε instead of xtol, since 10ε is probably the smallest reasonable setting.
- x : float, array-like, shape (n)
x[j-1] must be set to a guess at the jth component of the position of the minimum, for j = 1, 2, ..., n.
- lsqmon : None or callable lsqmon(xc, fvec, fjac, s, igrade, niter, nf, data=None), optional
Note: if this argument is None then a NAG-supplied facility will be used.
If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its arguments.
- Parameters
- xc : float, ndarray, shape (n)
The coordinates of the current point x.
- fvec : float, ndarray, shape (m)
The values of the residuals f_i at the current point x.
- fjac : float, ndarray, shape (m, n)
fjac[i-1, j-1] contains the value of ∂f_i/∂x_j at the current point x, for i = 1, 2, ..., m, for j = 1, 2, ..., n.
- s : float, ndarray, shape (n)
The singular values of the current Jacobian matrix. Thus s may be useful as information about the structure of your problem.
- igrade : int
lsq_uncon_quasi_deriv_comp estimates the dimension of the subspace for which the Jacobian matrix can be used as a valid approximation to the curvature (see Gill and Murray (1978)). This estimate is called the grade of the Jacobian matrix, and igrade gives its current value.
- niter : int
The number of iterations which have been performed in lsq_uncon_quasi_deriv_comp.
- nf : int
The number of evaluations of the residuals. (If selct = 2, nf is also the number of evaluations of the Jacobian matrix.)
- data : arbitrary, optional, modifiable in place
User-communication data for callback functions.
- iprint : int, optional
The frequency with which lsqmon is to be called.
If iprint > 0, lsqmon is called once every iprint iterations and just before exit from lsq_uncon_quasi_deriv_comp.
If iprint = 0, lsqmon is just called at the final point.
If iprint < 0, lsqmon is not called at all.
iprint should normally be set to a small positive number.
- maxcal : None or int, optional
Note: if this argument is None then a default value will be used, determined according to the value of selct; see the e04gb document linked above for the exact defaults.
maxcal enables you to limit the number of times that lsqfun is called by lsq_uncon_quasi_deriv_comp. There will be an error exit (see Exceptions) after maxcal calls of lsqfun.
- stepmx : float, optional
An estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency, a slight overestimate is preferable.)
lsq_uncon_quasi_deriv_comp will ensure that, for each iteration,
sum_{j=1}^{n} (x_j^(k) − x_j^(1))^2 ≤ (k × stepmx)^2,
where k is the iteration number.
Thus, if the problem has more than one solution, lsq_uncon_quasi_deriv_comp is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of x^(k) entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of F(x).
However, an underestimate of stepmx can lead to inefficiency.
- data : arbitrary, optional
User-communication data for callback functions.
- spiked_sorder : str, optional
If fjac in lsqfun is spiked (i.e., has unit extent in all but one dimension, or has size 1), spiked_sorder selects the storage order to associate with it in the NAG Engine:
- spiked_sorder = 'C'
row-major storage will be used;
- spiked_sorder = 'F'
column-major storage will be used.
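As a practical note on the xtol and stepmx descriptions above, the snippet below shows one way of deriving order-of-magnitude settings; the starting point and the guessed solution are illustrative assumptions, not values prescribed by the library.

    import numpy as np

    eps = np.finfo(float).eps          # machine precision for double precision

    # xtol of the order of sqrt(eps), as suggested above for a roughly
    # scaled problem (about 1.5e-8 in IEEE double precision).
    xtol = float(np.sqrt(eps))

    # stepmx: a slight overestimate of the Euclidean distance from the
    # starting point to where the solution is expected to lie.
    x_start = np.array([-1.2, 1.0])
    x_expected = np.array([1.0, 1.0])
    stepmx = 1.5 * float(np.linalg.norm(x_expected - x_start))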
- Returns
- x : float, ndarray, shape (n)
The final point x^(k). Thus, if no exception or warning is raised on exit, x[j-1] is the jth component of the estimated position of the minimum.
- fsumsq : float
The value of F(x), the sum of squares of the residuals f_i(x), at the final point given in x.
- fvec : float, ndarray, shape (m)
The value of the residual f_i(x) at the final point given in x, for i = 1, 2, ..., m.
- fjac : float, ndarray, shape (m, n)
The value of the first derivative ∂f_i/∂x_j evaluated at the final point given in x, for i = 1, 2, ..., m, for j = 1, 2, ..., n.
- s : float, ndarray, shape (n)
The singular values of the Jacobian matrix at the final point. Thus s may be useful as information about the structure of your problem.
- v : float, ndarray, shape (n, n)
The matrix V associated with the singular value decomposition
J = U S V^T
of the Jacobian matrix at the final point, stored by columns. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of J^T J.
- niter : int
The number of iterations which have been performed in lsq_uncon_quasi_deriv_comp.
- nf : int
The number of times that the residuals have been evaluated (i.e., the number of calls of lsqfun). If selct = 2, nf is also the number of times that the Jacobian matrix has been evaluated.
- Raises
- NagValueError
- (errno )
On entry, selct = ⟨value⟩.
Constraint: selct = 1 or 2.
- (errno )
On entry, and .
Constraint: .
- (errno )
On entry, .
Constraint: .
- (errno )
On entry, .
Constraint: .
- (errno )
On entry, .
Constraint: .
- (errno )
On entry, and .
Constraint: .
- (errno )
On entry, .
Constraint: .
- (errno )
There have been maxcal calls to lsqfun.
- Warns
- NagAlgorithmicWarning
- (errno )
User requested termination by setting iflag negative in lsqfun.
- (errno )
The conditions for a minimum have not all been satisfied, but a lower point could not be found.
- NagAlgorithmicMajorWarning
- (errno )
Failure in computing SVD of Jacobian matrix.
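The sketch below (referred to from the lsqfun description above) illustrates the iflag protocol: computing only what is requested and requesting early termination by returning a negative iflag. The exponential model and the keys of the data dictionary are assumptions made for the example, not part of the library interface.

    import numpy as np

    def lsqfun(iflag, m, xc, data=None):
        # Residuals f_i(x) = y_i - x_1*exp(x_2*t_i) for data supplied in 'data'.
        t, y = data['t'], data['y']
        n = len(xc)

        if not np.all(np.isfinite(xc)):
            # Cannot evaluate at this point: ask the solver to stop by
            # returning a negative iflag (see the Returns of lsqfun above).
            return -1, np.zeros(m), np.zeros((m, n))

        fvec = np.zeros(m)
        fjac = np.zeros((m, n))
        if iflag != 1:  # iflag = 0 or 2: the residuals are required
            fvec = y - xc[0] * np.exp(xc[1] * t)
        if iflag != 0:  # iflag = 1 or 2: the Jacobian is required
            fjac[:, 0] = -np.exp(xc[1] * t)
            fjac[:, 1] = -xc[0] * t * np.exp(xc[1] * t)
        return iflag, fvec, fjac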
- Notes
In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility.
lsq_uncon_quasi_deriv_comp is essentially identical to the function LSQFDQ in the NPL Algorithms Library. It is applicable to problems of the form:
Minimize F(x) = sum_{i=1}^{m} (f_i(x))^2,
where x = (x_1, x_2, ..., x_n)^T and m ≥ n. (The functions f_i(x) are often referred to as ‘residuals’.)
You must supply a function to calculate the values of the f_i(x) and their first derivatives ∂f_i/∂x_j at any point x.
From a starting point x^(1) supplied by you, the function generates a sequence of points x^(2), x^(3), ..., which is intended to converge to a local minimum of F(x). The sequence of points is given by
x^(k+1) = x^(k) + α^(k) p^(k),
where the vector p^(k) is a direction of search, and α^(k) is chosen such that F(x^(k) + α^(k) p^(k)) is approximately a minimum with respect to α^(k).
The vector p^(k) used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then p^(k) is the Gauss–Newton direction; otherwise the second derivatives of the f_i(x) are taken into account using a quasi-Newton updating scheme.
The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton’s method.
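As a worked illustration of the description above, the sketch below fits the two-parameter exponential model from the lsqfun sketch after the Warns section to a small synthetic data set and prints monitoring information via lsqmon. The data, starting point and parameter settings are illustrative assumptions, and the result object is assumed to expose the fields listed under Returns as attributes (naginterfaces functions normally return a named tuple).

    import numpy as np
    from naginterfaces.library import opt

    # Synthetic data for the model f_i(x) = y_i - x_1*exp(x_2*t_i).
    data = {
        't': np.array([0.0, 1.0, 2.0, 3.0, 4.0]),
        'y': np.array([2.0, 1.2, 0.8, 0.5, 0.3]),
    }
    m = len(data['t'])

    def lsqfun(iflag, m, xc, data=None):
        t, y = data['t'], data['y']
        fvec = y - xc[0] * np.exp(xc[1] * t)
        fjac = np.column_stack((-np.exp(xc[1] * t),
                                -xc[0] * t * np.exp(xc[1] * t)))
        return iflag, fvec, fjac

    def lsqmon(xc, fvec, fjac, s, igrade, niter, nf, data=None):
        # Monitoring: iteration count, residual-evaluation count, sum of squares.
        print(niter, nf, float(np.sum(fvec**2)))

    # selct = 2: the Jacobian is cheap to evaluate alongside the residuals.
    soln = opt.lsq_uncon_quasi_deriv_comp(
        m, 2, lsqfun,
        eta=0.5, xtol=float(np.sqrt(np.finfo(float).eps)), x=[1.0, -0.5],
        lsqmon=lsqmon, iprint=1, data=data,
    )
    print('Minimum found at', soln.x, 'with sum of squares', soln.fsumsq)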
- References
Gill, P E and Murray, W, 1978, Algorithms for the solution of the nonlinear least squares problem, SIAM J. Numer. Anal. (15), 977–992