e04ggc is part of the new NAG optimization modelling suite (see Section 4.1 in the E04 Chapter Introduction); the definition of the nonlinear residual function values and gradients therefore needs to be split into two separate subroutines. e04ggc offers a significant improvement in performance over e04gbc as well as additional functionality, such as variable bounds and user-evaluation recovery, among others.
e04gbc is a comprehensive algorithm for finding an unconstrained minimum of a sum of squares of $m$ nonlinear functions in $n$ variables $(m\ge n)$. First derivatives are required.
e04gbc is intended for objective functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
e04gbc is designed to minimize $$F\left(x\right)=\sum _{i=1}^{m}{\left[{f}_{i}\left(x\right)\right]}^{2},$$ where $x={({x}_{1},{x}_{2},\dots ,{x}_{n})}^{\mathrm{T}}$ and $m\ge n$. (The functions ${f}_{i}\left(x\right)$ are often referred to as ‘residuals’.) You must supply a function to calculate the values of the ${f}_{i}\left(x\right)$ and their first derivatives $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ at any point $x$.
From a starting point ${x}^{\left(1\right)}$, e04gbc generates a sequence of points ${x}^{\left(2\right)},{x}^{\left(3\right)},\dots ,$ which is intended to converge to a local minimum of $F\left(x\right)$. The sequence of points is given by $${x}^{\left(k+1\right)}={x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)},$$
where the vector ${p}^{\left(k\right)}$ is a direction of search, and ${\alpha}^{\left(k\right)}$ is chosen such that $F({x}^{\left(k\right)}+{\alpha}^{\left(k\right)}{p}^{\left(k\right)})$ is approximately a minimum with respect to ${\alpha}^{\left(k\right)}$.
The vector ${p}^{\left(k\right)}$ used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then ${p}^{\left(k\right)}$ is the Gauss–Newton direction; otherwise the second derivatives of the ${f}_{i}\left(x\right)$ are taken into account using a quasi-Newton updating scheme.
The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton's method.
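The search direction can be made concrete. Writing ${f}^{\left(k\right)}$ for the vector of residuals and ${J}^{\left(k\right)}$ for the Jacobian matrix at ${x}^{\left(k\right)}$, the Gauss–Newton direction mentioned above is characterized as follows (a standard formulation, not stated explicitly in this document):

```latex
% Gauss–Newton direction: minimizer of the linearized sum of squares
p^{(k)} = \operatorname*{arg\,min}_{p}\; \bigl\| f^{(k)} + J^{(k)} p \bigr\|_2^2,
\qquad \text{equivalently} \qquad
\bigl(J^{(k)}\bigr)^{\mathrm{T}} J^{(k)}\, p^{(k)}
  = -\bigl(J^{(k)}\bigr)^{\mathrm{T}} f^{(k)}.
```

When the sum of squares is not reduced sufficiently, the quasi-Newton scheme augments the left-hand side with an approximation to the neglected second-derivative term $\sum _{i}{f}_{i}{\nabla }^{2}{f}_{i}$, recovering Newton-like behaviour.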
4 References
Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least squares problem SIAM J. Numer. Anal. 15 977–992
5 Arguments
1: $\mathbf{m}$ – Integer Input
On entry: $m$, the number of residuals, ${f}_{i}\left(x\right)$.
2: $\mathbf{n}$ – Integer Input
On entry: $n$, the number of variables, ${x}_{j}$.
Constraint:
$1\le {\mathbf{n}}\le {\mathbf{m}}$.
3: $\mathbf{lsqfun}$ – function, supplied by the user External Function
lsqfun must calculate the vector of values ${f}_{i}\left(x\right)$ and their first derivatives $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ at any point $x$. (However, if you do not wish to calculate the residuals at a particular $x$, there is the option of setting an argument to cause e04gbc to terminate immediately.)
On exit: unless $\mathbf{comm}\mathbf{\to}\mathbf{flag}=1$ on entry, or $\mathbf{comm}\mathbf{\to}\mathbf{flag}$ is reset to a negative number, then ${\mathbf{fvec}}\left[\mathit{i}-1\right]$ must contain the value of ${f}_{\mathit{i}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,m$.
On exit: unless $\mathbf{comm}\mathbf{\to}\mathbf{flag}=0$ on entry, or $\mathbf{comm}\mathbf{\to}\mathbf{flag}$ is reset to a negative number, then ${\mathbf{fjac}}\left[(\mathit{i}-1)\times {\mathbf{tdfjac}}+\mathit{j}-1\right]$ must contain the value of the first derivative $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.
6: $\mathbf{tdfjac}$ – Integer Input
On entry: the stride separating matrix column elements in the array fjac.
7: $\mathbf{comm}$ – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to lsqfun.
flag – Integer Input/Output
On entry: $\mathbf{comm}\mathbf{\to}\mathbf{flag}$ contains $0$, 1 or $2$. The value 0 indicates that only the residuals need to be evaluated, the value 1 indicates that only the Jacobian matrix needs to be evaluated, and the value 2 indicates that both the residuals and the Jacobian matrix must be calculated. (If the default value of the optional parameter ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ is used (i.e., ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag\_Lin\_Deriv}$), then lsqfun will always be called with $\mathbf{comm}\mathbf{\to}\mathbf{flag}$ set to 2.)
On exit: if lsqfun resets $\mathbf{comm}\mathbf{\to}\mathbf{flag}$ to some negative number then e04gbc will terminate immediately with the error indicator NE_USER_STOP. If fail is supplied to e04gbc, ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be set to the user's setting of $\mathbf{comm}\mathbf{\to}\mathbf{flag}$.
first – Nag_Boolean Input
On entry: will be set to Nag_TRUE on the first call to lsqfun and Nag_FALSE for all subsequent calls.
nf – Integer Input
On entry: the number of calls made to lsqfun including the current one.
user – double *
iuser – Integer *
p – Pointer
The type Pointer will be void * with a C compiler that defines void *, and char * otherwise. Before calling e04gbc these pointers may be allocated memory and initialized with various quantities for use by lsqfun when called from e04gbc.
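The p member is ordinary C void * communication: pack whatever lsqfun needs into a structure, point comm→p at it before calling e04gbc, and cast it back inside the user function. A minimal, library-free sketch of the pattern (the structure and field names are invented for illustration):

```c
#include <stddef.h>

/* Invented user data: an observation count and a scale factor. */
typedef struct { size_t nobs; double scale; } UserData;

/* Stand-in for the Nag_Comm structure's Pointer member p. */
typedef struct { void *p; } Comm;

/* A routine in the style of lsqfun recovering its data through comm->p. */
double scaled_count(const Comm *comm)
{
    const UserData *ud = (const UserData *) comm->p; /* cast back from void * */
    return ud->scale * (double) ud->nobs;
}
```

The same cast-back idiom applies to the user, iuser and p members wherever they appear in this document.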
Note: lsqfun should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by e04gbc. If your code inadvertently does return any NaNs or infinities, e04gbc is likely to produce unexpected results.
Note: lsqfun should be tested separately before being used in conjunction with e04gbc. Function e04yac may be used to check the derivatives.
On entry: ${\mathbf{x}}\left[\mathit{j}-1\right]$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.
On exit: the final point ${x}^{*}$. On successful exit, ${\mathbf{x}}\left[j-1\right]$ is the $j$th component of the estimated position of the minimum.
5: $\mathbf{fsumsq}$ – double * Output
On exit: the value of $F\left(x\right)$, the sum of squares of the residuals ${f}_{i}\left(x\right)$, at the final point given in x.
On exit: ${\mathbf{fvec}}\left[\mathit{i}-1\right]$ is the value of the residual ${f}_{\mathit{i}}\left(x\right)$ at the final point given in x, for $\mathit{i}=1,2,\dots ,m$.
On exit: ${\mathbf{fjac}}\left[\left(\mathit{i}-1\right)\times {\mathbf{tdfjac}}+\mathit{j}-1\right]$ contains the value of the first derivative $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the final point given in x, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.
8: $\mathbf{tdfjac}$ – Integer Input
On entry: the stride separating matrix column elements in the array fjac.
Constraint:
${\mathbf{tdfjac}}\ge {\mathbf{n}}$.
9: $\mathbf{options}$ – Nag_E04_Opt * Input/Output
On entry/exit: a pointer to a structure of type Nag_E04_Opt whose members are optional parameters for e04gbc. These structure members offer the means of adjusting some of the argument values of the algorithm and on output will supply further details of the results. A description of the members of options is given in Section 11.2.
If any of these optional parameters are required then the structure options should be declared and initialized by a call to e04xxc and supplied as an argument to e04gbc. However, if the optional parameters are not required the NAG defined null pointer, E04_DEFAULT, can be used in the function call.
10: $\mathbf{comm}$ – Nag_Comm * Input/Output
Note: comm is a NAG defined type (see Section 3.1.1 in the Introduction to the NAG Library CL Interface).
On entry/exit: structure containing pointers for communication to the user-supplied function; see the above description of lsqfun for details. If you do not need to make use of this communication feature the null pointer NAGCOMM_NULL may be used in the call to e04gbc; comm will then be declared internally for use in calls to the user-supplied function.
11: $\mathbf{fail}$ – NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).
The exits NW_TOO_MANY_ITER, NW_COND_MIN, and NE_SVD_FAIL may also be caused by mistakes in lsqfun, by the formulation of the problem or by an awkward function. If there are no such mistakes it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
NE_2_INT_ARG_LT
On entry, ${\mathbf{m}}=\u27e8\mathit{\text{value}}\u27e9$ while ${\mathbf{n}}=\u27e8\mathit{\text{value}}\u27e9$. These arguments must satisfy ${\mathbf{m}}\ge {\mathbf{n}}$.
On entry, ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}=\u27e8\mathit{\text{value}}\u27e9$ while ${\mathbf{n}}=\u27e8\mathit{\text{value}}\u27e9$. These arguments must satisfy ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}\ge {\mathbf{n}}$.
On entry, ${\mathbf{tdfjac}}=\u27e8\mathit{\text{value}}\u27e9$ while ${\mathbf{n}}=\u27e8\mathit{\text{value}}\u27e9$. These arguments must satisfy ${\mathbf{tdfjac}}\ge {\mathbf{n}}$.
NE_2_REAL_ARG_LT
On entry, ${\mathbf{options}}\mathbf{.}{\mathbf{step\_max}}=\u27e8\mathit{\text{value}}\u27e9$ while ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}=\u27e8\mathit{\text{value}}\u27e9$. These arguments must satisfy ${\mathbf{options}}\mathbf{.}{\mathbf{step\_max}}\ge {\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ had an illegal value.
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}$ had an illegal value.
NE_DERIV_ERRORS
Large errors were found in the derivatives of the objective function.
You should check carefully the derivation and programming of expressions for the $\frac{\partial {f}_{i}}{\partial {x}_{j}}$, because it is very unlikely that lsqfun is calculating them correctly.
NE_INT_ARG_LT
On entry, ${\mathbf{n}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{n}}\ge 1$.
NE_INVALID_INT_RANGE_1
Value $\u27e8\mathit{\text{value}}\u27e9$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{max\_iter}}$ not valid. Correct range is ${\mathbf{options}}\mathbf{.}{\mathbf{max\_iter}}\ge 0$.
NE_INVALID_REAL_RANGE_EF
Value $\u27e8\mathit{\text{value}}\u27e9$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$ not valid. Correct range is $\u27e8\mathit{\text{value}}\u27e9\le {\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}<1.0$.
NE_INVALID_REAL_RANGE_FF
Value $\u27e8\mathit{\text{value}}\u27e9$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch\_tol}}$ not valid. Correct range is $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch\_tol}}<1.0$.
NE_NOT_APPEND_FILE
Cannot open file $\u27e8\mathit{string}\u27e9$ for appending.
NE_NOT_CLOSE_FILE
Cannot close file $\u27e8\mathit{string}\u27e9$.
NE_OPT_NOT_INIT
Options structure not initialized.
NE_SVD_FAIL
The computation of the singular value decomposition of the Jacobian matrix has failed to converge in a reasonable number of sub-iterations.
It may be worth applying e04gbc again starting with an initial approximation which is not too close to the point at which the failure occurred.
NE_USER_STOP
User requested termination, user flag value $=\u27e8\mathit{\text{value}}\u27e9$.
This exit occurs if you set $\mathbf{comm}\mathbf{\to}\mathbf{flag}$ to a negative value in lsqfun. If fail is supplied the value of ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be the same as your setting of $\mathbf{comm}\mathbf{\to}\mathbf{flag}$.
NE_WRITE_ERROR
Error occurred when writing to file $\u27e8\mathit{string}\u27e9$.
NW_COND_MIN
The conditions for a minimum have not all been satisfied, but a lower point could not be found.
This could be because ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$ has been set so small that rounding errors in the evaluation of the residuals make attainment of the convergence conditions impossible.
See Section 7 for further information.
NW_TOO_MANY_ITER
The maximum number of iterations, $\u27e8\mathit{\text{value}}\u27e9$, has been performed.
If steady reductions in the sum of squares, $F\left(x\right)$, were monitored up to the point where this exit occurred, then the exit probably occurred simply because ${\mathbf{options}}\mathbf{.}{\mathbf{max\_iter}}$ was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that $F\left(x\right)$ has no minimum.
7 Accuracy
If the problem is reasonably well scaled and a successful exit is made, then, for a computer with a mantissa of $t$ decimals, one would expect to get about $t/2-1$ decimals accuracy in the components of $x$ and between $t-1$ (if $F\left(x\right)$ is of order 1 at the minimum) and $2t-2$ (if $F\left(x\right)$ is close to zero at the minimum) decimals accuracy in $F\left(x\right)$.
A successful exit (${\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE\_NOERROR}$) is made from e04gbc when (B1, B2 and B3) or B4 or B5 hold, where
and where $\Vert \text{.}\Vert $, $\epsilon $ and the optional parameter ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$ are as defined in Section 11.2, while ${F}^{\left(k\right)}$ and ${g}^{\left(k\right)}$ are the values of $F\left(x\right)$ and its vector of first derivatives at ${x}^{\left(k\right)}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE\_NOERROR}$ then the vector in x on exit, ${x}_{\mathrm{sol}}$, is almost certainly an estimate of ${x}_{\mathrm{true}}$, the position of the minimum to the accuracy specified by ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NW\_COND\_MIN}}$, then ${x}_{\mathrm{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but to verify this you should make the following checks. If
(a)the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or a fast linear rate, and
where $\mathrm{T}$ denotes transpose, then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the minimum. When (b) is true, then usually $F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$.
Further suggestions about confirmation of a computed solution are given in the E04 Chapter Introduction.
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
e04gbc is not threaded in any implementation.
9 Further Comments
The number of iterations required depends on the number of variables, the number of residuals, the behaviour of $F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed per iteration of e04gbc varies, but for $m\gg n$ is approximately $n\times {m}^{2}+O\left({n}^{3}\right)$. In addition, each iteration makes at least one call of lsqfun. So, unless the residuals can be evaluated very quickly, the run time will be dominated by the time spent in lsqfun.
Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $(\mathrm{-1},+1)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix of $F\left(x\right)$ at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that e04gbc will take less computer time.
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to e04ycc, using information returned in the arrays ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$. See e04ycc for further details.
10 Example
This example finds the least squares estimates of ${x}_{1}$, ${x}_{2}$ and ${x}_{3}$ in the model $$y={x}_{1}+\frac{{t}_{1}}{{x}_{2}{t}_{2}+{x}_{3}{t}_{3}}$$ using the 15 sets of data given in the following table.
$\mathit{y}$      ${\mathit{t}}_{1}$   ${\mathit{t}}_{2}$   ${\mathit{t}}_{3}$
0.14    1.0    15.0    1.0
0.18    2.0    14.0    2.0
0.22    3.0    13.0    3.0
0.25    4.0    12.0    4.0
0.29    5.0    11.0    5.0
0.32    6.0    10.0    6.0
0.35    7.0     9.0    7.0
0.39    8.0     8.0    8.0
0.37    9.0     7.0    7.0
0.58   10.0     6.0    6.0
0.73   11.0     5.0    5.0
0.96   12.0     4.0    4.0
1.34   13.0     3.0    3.0
2.10   14.0     2.0    2.0
4.39   15.0     1.0    1.0
The program uses $(0.5, 1.0, 1.5)$ as the initial guess at the position of the minimum.
The program shows the use of certain optional parameters, with some option values being assigned directly within the program text and by reading values from a data file.
The options structure is declared and initialized by e04xxc. A value is then assigned directly to ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}$ and three further options are read from the data file by use of e04xyc. The memory freeing function e04xzc is used to free the memory assigned to the pointers in the option structure. You must not use the standard C function free() for this purpose.
A number of optional input and output arguments to e04gbc are available through the structure argument options, type Nag_E04_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member; those arguments not selected will be assigned default values. If no use is to be made of any of the optional parameters you should use the NAG defined null pointer, E04_DEFAULT, in place of options when calling e04gbc; the default settings will then be used for all arguments.
Before assigning values to options directly the structure must be initialized by a call to the function e04xxc. Values may then be assigned to the structure members in the normal C manner.
After return from e04gbc, the options structure may only be re-used for future calls of e04gbc if the dimensions of the new problem are the same. Otherwise, the structure must be cleared by a call of e04xzc and re-initialized by a call of e04xxc before future calls. Failure to do this will result in unpredictable behaviour.
Optional parameter settings may also be read from a text file using the function e04xyc in which case initialization of the options structure will be performed automatically if not already done. Any subsequent direct assignment to the options structure must not be preceded by initialization.
If assignment of functions and memory to pointers in the options structure is required, this must be done directly in the calling program. They cannot be assigned using e04xyc.
11.1 Optional Parameter Checklist and Default Values
For easy reference, the following list shows the members of options which are valid for e04gbc together with their default values where relevant. The number $\epsilon $ is a generic notation for machine precision (see X02AJC).
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag\_TRUE}$ the argument settings in the call to e04gbc will be printed.
print_level – Nag_PrintType
Default $=\mathrm{Nag\_Soln\_Iter}$
On entry: the level of results printout produced by e04gbc. The following values are available:
$\mathrm{Nag\_NoPrint}$
No output.
$\mathrm{Nag\_Soln}$
The final solution.
$\mathrm{Nag\_Iter}$
One line of output for each iteration.
$\mathrm{Nag\_Soln\_Iter}$
The final solution and one line of output for each iteration.
$\mathrm{Nag\_Soln\_Iter\_Full}$
The final solution and detailed printout at each iteration.
Details of each level of results printout are described in Section 11.3.
Constraint:
${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_NoPrint}$, $\mathrm{Nag\_Soln}$, $\mathrm{Nag\_Iter}$, $\mathrm{Nag\_Soln\_Iter}$ or $\mathrm{Nag\_Soln\_Iter\_Full}$.
outfile – const char[512]
Default $=\mathtt{stdout}$
On entry: the name of the file to which results should be printed. If ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}\left[0\right]=\text{'\0'}$ then the stdout stream is used.
print_fun – pointer to function
Default $=$ NULL
On entry: printing function defined by you; the prototype of ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$ is
void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{deriv\_check}}=\mathrm{Nag\_TRUE}$ a check of the derivatives defined by lsqfun will be made at the starting point x. The derivative check is carried out by a call to e04yac. A starting point of $x=0$ or $x=1$ should be avoided if this test is to be meaningful, but if either of these starting points is necessary then e04yac should be used to check lsqfun at a different point prior to calling e04gbc.
On entry: the accuracy in $x$ to which the solution is required. If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathrm{sol}}$, the estimated position prior to a normal exit, is such that $$\Vert {x}_{\mathrm{sol}}-{x}_{\mathrm{true}}\Vert <{\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}\times \left(1.0+\Vert {x}_{\mathrm{true}}\Vert \right),$$
where $\Vert y\Vert =\sqrt{{\sum}_{j=1}^{n}{y}_{j}^{2}}$. For example, if the elements of ${x}_{\mathrm{sol}}$ are not much larger than $1.0$ in modulus and if ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}=1.0\times {10}^{\mathrm{-5}}$, then ${x}_{\mathrm{sol}}$ is usually accurate to about five decimal places. (For further details see Section 7.) If $F\left(x\right)$ and the variables are scaled roughly as described in Section 9 and $\epsilon $ is the machine precision, then a setting of order ${\mathbf{options}}\mathbf{.}{\mathbf{optim\_tol}}=\sqrt{\epsilon}$ will usually be appropriate.
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ specifies whether the linear minimizations (i.e., minimizations of $F({x}^{\left(k\right)}+{\alpha}^{\left(k\right)}{p}^{\left(k\right)})$ with respect to ${\alpha}^{\left(k\right)}$) are to be performed by a function which just requires the evaluation of the ${f}_{i}\left(x\right)$, $\mathrm{Nag\_Lin\_NoDeriv}$, or by a function which also requires the first derivatives of the ${f}_{i}\left(x\right)$, $\mathrm{Nag\_Lin\_Deriv}$.
It will often be possible to evaluate the first derivatives of the residuals in about the same amount of computer time that is required for the evaluation of the residuals themselves – if this is so then e04gbc should be called with ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ set to $\mathrm{Nag\_Lin\_Deriv}$. However, if the evaluation of the derivatives takes more than about four times as long as the evaluation of the residuals, then a setting of $\mathrm{Nag\_Lin\_NoDeriv}$ will usually be preferable. If in doubt, use the default setting $\mathrm{Nag\_Lin\_Deriv}$ as it is slightly more robust.
Constraint:
${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag\_Lin\_Deriv}$ or $\mathrm{Nag\_Lin\_NoDeriv}$.
If ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag\_Lin\_NoDeriv}$ then the default value of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch\_tol}}$ will be changed from $0.9$ to $0.5$ if ${\mathbf{n}}>1$.
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch\_tol}}$ specifies how accurately the linear minimizations are to be performed.
Every iteration of e04gbc involves a linear minimization, i.e., minimization of $F({x}^{\left(k\right)}+{\alpha}^{\left(k\right)}{p}^{\left(k\right)})$ with respect to ${\alpha}^{\left(k\right)}$. The minimum with respect to ${\alpha}^{\left(k\right)}$ will be located more accurately for small values of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch\_tol}}$ (say 0.01) than for large values (say 0.9). Although accurate linear minimizations will generally reduce the number of iterations performed by e04gbc, they will increase the number of calls of lsqfun made each iteration. On balance it is usually more efficient to perform a low accuracy minimization.
On entry: an estimate of the Euclidean distance between the solution and the starting point supplied. (For maximum efficiency, a slight overestimate is preferable.) e04gbc will ensure that, for each iteration, $$\Vert {x}^{\left(k\right)}-{x}^{\left(1\right)}\Vert \le {\mathbf{options}}\mathbf{.}{\mathbf{step\_max}},$$
where $k$ is the iteration number. Thus, if the problem has more than one solution, e04gbc is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of $F\left(x\right)$. However, an underestimate of ${\mathbf{options}}\mathbf{.}{\mathbf{step\_max}}$ can lead to inefficiency.
On entry: n values of memory will be automatically allocated by e04gbc and this is the recommended method of use of ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$. However, you may supply memory from the calling program.
On exit: the singular values of the Jacobian matrix at the final point. Thus ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$ may be useful as information about the structure of your problem.
On entry: ${\mathbf{n}}\times {\mathbf{n}}$ values of memory will be automatically allocated by e04gbc and this is the recommended method of use of ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$. However, you may supply memory from the calling program.
On exit: the matrix $V$ associated with the singular value decomposition
$$J={USV}^{\mathrm{T}}$$
of the Jacobian matrix at the final point, stored by rows. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of ${J}^{\mathrm{T}}J$.
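Since $U$ has orthonormal columns, the decomposition diagonalizes ${J}^{\mathrm{T}}J$; this is what makes ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$ sufficient for the variance-covariance computations mentioned in Section 9:

```latex
J^{\mathrm{T}} J = V S\, U^{\mathrm{T}} U\, S V^{\mathrm{T}} = V S^{2} V^{\mathrm{T}},
\qquad
\bigl(J^{\mathrm{T}} J\bigr)^{-1} = V S^{-2} V^{\mathrm{T}}
\quad \text{(when all singular values are nonzero),}
```

so inverses of ${J}^{\mathrm{T}}J$ are available from the stored singular values and $V$ without further factorization.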
tdv – Integer
Default $={\mathbf{n}}$
On entry: if memory is supplied then ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}$ must contain the last dimension of the array assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$ as declared in the function from which e04gbc is called.
On exit: the trailing dimension used by ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$. If the NAG default memory allocation has been used this value will be n.
On exit: the grade of the Jacobian at the final point. e04gbc estimates the dimension of the subspace for which the Jacobian matrix can be used as a valid approximation to the curvature (see Gill and Murray (1978)); this estimate is called the grade.
iter – Integer
On exit: the number of iterations which have been performed in e04gbc.
nf – Integer
On exit: the number of times the residuals have been evaluated (i.e., the number of calls of lsqfun).
11.3 Description of Printed Output
The level of printed output can be controlled with the structure members ${\mathbf{options}}\mathbf{.}{\mathbf{list}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}$ (see Section 11.2). If ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag\_TRUE}$ then the argument values to e04gbc are listed, whereas the printout of results is governed by the value of ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}$. The default of ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_Soln\_Iter}$ provides a single line of output at each iteration and the final result. This section describes all of the possible levels of results printout available from e04gbc.
When ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_Iter}$ or $\mathrm{Nag\_Soln\_Iter}$, a single line of output is produced on completion of each iteration; this gives the following values:
the value of the objective function, $F\left({x}^{\left(k\right)}\right)$.
Norm g
the Euclidean norm of the gradient of $F\left({x}^{\left(k\right)}\right)$.
Norm x
the Euclidean norm of ${x}^{\left(k\right)}$.
Norm(x(k-1)-x(k))
the Euclidean norm of ${x}^{(k-1)}-{x}^{\left(k\right)}$.
Step
the step ${\alpha}^{\left(k\right)}$ taken along the computed search direction ${p}^{\left(k\right)}$.
When ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_Soln\_Iter\_Full}$ more detailed results are given at each iteration. Additional values output are:
Grade
the grade of the Jacobian matrix. (See description of ${\mathbf{options}}\mathbf{.}{\mathbf{grade}}$, Section 9.)
x
the current point ${x}^{\left(k\right)}$.
g
the current gradient of $F\left({x}^{\left(k\right)}\right)$.
Singular values
the singular values of the current approximation to the Jacobian matrix.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_Soln}$, $\mathrm{Nag\_Soln\_Iter}$ or $\mathrm{Nag\_Soln\_Iter\_Full}$ the final result consists of:
x
the final point ${x}^{*}$.
g
the gradient of $F$ at the final point.
Residuals
the values of the residuals ${f}_{i}$ at the final point.
Sum of squares
the value of $F\left({x}^{*}\right)$, the sum of squares of the residuals at the final point.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}=\mathrm{Nag\_NoPrint}$ then printout will be suppressed; you can print the final solution when e04gbc returns to the calling program.
11.3.1 Output of results via a user-defined printing function
You may also specify your own print function for output of iteration results and the final solution by use of the ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$ function pointer, which has prototype
void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);
The rest of this section can be skipped if the default printing facilities provide the required functionality.
When a user-defined function is assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$ this will be called in preference to the internal print function of e04gbc. Calls to the user-defined function are again controlled by means of the ${\mathbf{options}}\mathbf{.}{\mathbf{print\_level}}$ member. Information is provided through st and comm, the two structure arguments to ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$. The structure member $\mathbf{comm}\mathbf{\to}\mathbf{it\_prt}$ is relevant in this context. If $\mathbf{comm}\mathbf{\to}\mathbf{it\_prt}=\mathrm{Nag\_TRUE}$ then the results from the last iteration of e04gbc are in the following members of st:
m – Integer
The number of residuals.
n – Integer
The number of variables.
x – double *
Points to the $\mathbf{st}\mathbf{\to}\mathbf{n}$ memory locations holding the current point ${x}^{\left(k\right)}$.
fvec – double *
Points to the $\mathbf{st}\mathbf{\to}\mathbf{m}$ memory locations holding the values of the residuals ${f}_{i}$ at the current point ${x}^{\left(k\right)}$.
fjac – double *
Points to $\mathbf{st}\mathbf{\to}\mathbf{m}\times \mathbf{st}\mathbf{\to}\mathbf{tdfjac}$ memory locations. $\mathbf{st}\mathbf{\to}\mathbf{fjac}\left[(\mathit{i}-1)\times \mathbf{st}\mathbf{\to}\mathbf{tdfjac}+(\mathit{j}-1)\right]$ contains the value of $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$ at the current point ${x}^{\left(k\right)}$.
tdfjac – Integer
The trailing dimension for $\mathbf{st}\mathbf{\to}\mathbf{fjac}\left[\right]$.
step – double
The step ${\alpha}^{\left(k\right)}$ taken along the search direction ${p}^{\left(k\right)}$.
xk_norm – double
The Euclidean norm of ${x}^{(k-1)}-{x}^{\left(k\right)}$.
g – double *
Points to the $\mathbf{st}\mathbf{\to}\mathbf{n}$ memory locations holding the gradient of $F$ at the current point ${x}^{\left(k\right)}$.
grade – Integer
The grade of the Jacobian matrix.
s – double *
Points to the $\mathbf{st}\mathbf{\to}\mathbf{n}$ memory locations holding the singular values of the current Jacobian.
iter – Integer
The number of iterations, $k$, performed by e04gbc.
it_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the result of the current iteration.
sol_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the final result.
user – double *
iuser – Integer *
p – Pointer
Pointers for communication of user information. If used they must be allocated memory either before entry to e04gbc or during a call to lsqfun or ${\mathbf{options}}\mathbf{.}{\mathbf{print\_fun}}$. The type Pointer will be void * with a C compiler that defines void * and char * otherwise.