naginterfaces.library.opt.uncon_conjgrd_comp

naginterfaces.library.opt.uncon_conjgrd_comp(objfun, x, comm, data=None, io_manager=None)[source]

uncon_conjgrd_comp minimizes an unconstrained nonlinear function of several variables using a pre-conditioned, limited memory quasi-Newton conjugate gradient method. First derivatives (or an ‘acceptable’ finite difference approximation to them) are required. It is intended for use on large scale problems.

Note: this function uses optional algorithmic parameters, see also: uncon_conjgrd_option_file(), uncon_conjgrd_option_string(), nlp1_init().

Deprecated since version 27.0.0.0: uncon_conjgrd_comp is deprecated. Please use handle_solve_bounds_foas() instead. See also the Replacement Calls document.

For full information please refer to the NAG Library document for e04dg

https://support.nag.com/numeric/nl/nagdoc_30/flhtml/e04/e04dgf.html
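Before the detailed parameter descriptions, the following minimal sketch illustrates the overall calling pattern on a toy quadratic objective. The argument passed to nlp1_init() and the unpacking of the returned values are assumptions based on the parameter and return listings below; see the e04dg document linked above for the library's own example.

import numpy as np
from naginterfaces.library import opt

def objfun(mode, x, nstate, data=None):
    """Objective and gradient callback; a simple convex quadratic for illustration."""
    x = np.asarray(x, dtype=float)
    objf = float(np.sum((x - 1.0)**2))  # F(x)
    objgrd = 2.0*(x - 1.0)              # gradient; used by the solver when mode = 2
    return objf, objgrd

# Communication structure; the initializer argument shown here is an assumption.
comm = opt.nlp1_init('uncon_conjgrd_comp')

x0 = [3.0, -2.0, 0.5]  # initial estimate of the solution
itera, objf, objgrd, x = opt.uncon_conjgrd_comp(objfun, x0, comm)
print('iterations:', itera, 'final objective:', objf)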

Parameters
objfun : callable (objf, objgrd) = objfun(mode, x, nstate, data=None)

objfun must calculate the objective function F(x) and possibly its gradient as well for a specified n-element vector x.

Parameters
mode : int

Indicates which values must be assigned during each call of objfun. Only the following values need be assigned:

mode = 0: objf.

mode = 2: objf and objgrd.

x : float, ndarray, shape (n)

x, the vector of variables at which the objective function and its gradient are to be evaluated.

nstate : int

Will be 1 on the first call of objfun by uncon_conjgrd_comp, and 0 for all subsequent calls. Thus, you may wish to test nstate within objfun in order to perform certain calculations once only. For example, you may read data or initialize global variables when nstate = 1.

data : arbitrary, optional, modifiable in place

User-communication data for callback functions.

Returns
objf : float

The value of the objective function at x.

objgrd : float, array-like, shape (n)

If mode = 2, objgrd[j-1] must contain the value of ∂F/∂x_j evaluated at x, for j = 1, 2, …, n.

x : float, array-like, shape (n)

An initial estimate of the solution.

comm : dict, communication object, modified in place

Communication structure.

This argument must have been initialized by a prior call to nlp1_init().

data : arbitrary, optional

User-communication data for callback functions.

io_manager : FileObjManager, optional

Manager for I/O in this routine.

Returns
itera : int

The total number of iterations performed.

objf : float

The value of the objective function at the final iterate.

objgrd : float, ndarray, shape (n)

The gradient of the objective function at the final iterate (or its finite difference approximation).

x : float, ndarray, shape (n)

The final estimate of the solution.

Other Parameters
‘Defaults’ : valueless

This special keyword may be used to reset all options to their default values. Options are supplied through the companion option-setting functions; a sketch is given after this list of options.

‘Estimated Optimal Function Value’ : float

This value of r specifies the user-supplied guess of the optimum objective function value F_est. This value is used to calculate an initial step length α0 (see Algorithmic Details). If the value of r is not specified (the default), then this has the effect of setting α0 to unity. It should be noted that for badly scaled functions a unit step along the steepest descent direction will often compute the objective function at very large values of x.

‘Function Precision’ : float

Default = ε^0.9 (where ε is the machine precision)

The argument r defines ε_r, which is intended to be a measure of the accuracy with which the problem function F(x) can be computed. If r < ε or r ≥ 1, the default value is used.

The value of ε_r should reflect the relative precision of 1 + |F(x)|; i.e., ε_r acts as a relative precision when |F| is large, and as an absolute precision when |F| is small. For example, if F(x) is typically of order 1000 and the first six significant digits are known to be correct, an appropriate value for ε_r would be 10^-6. In contrast, if F(x) is typically of order 10^-4 and the first six significant digits are known to be correct, an appropriate value for ε_r would be 10^-10. The choice of ε_r can be quite complicated for badly scaled problems; see Module 8 of Gill et al. (1981) for a discussion of scaling techniques. The default value is appropriate for most simple functions that are computed with full accuracy. However, when the accuracy of the computed function values is known to be significantly worse than full precision, the value of ε_r should be large enough so that no attempt will be made to distinguish between function values that differ by less than the error inherent in the calculation.

‘Iteration Limit’ : int

Default = max(50, 5n)

The value of i specifies the maximum number of iterations allowed before termination. If i < 0, the default value is used.

Problems whose Hessian matrices at the solution contain sets of clustered eigenvalues are likely to be minimized in significantly fewer than n iterations. Problems without this property may require anything between n and 5n iterations, with approximately 2n iterations being a common figure for moderately difficult problems.

‘Iters’ : int

Default = max(50, 5n)

The value of i specifies the maximum number of iterations allowed before termination. If i < 0, the default value is used.

Problems whose Hessian matrices at the solution contain sets of clustered eigenvalues are likely to be minimized in significantly fewer than n iterations. Problems without this property may require anything between n and 5n iterations, with approximately 2n iterations being a common figure for moderately difficult problems.

‘Itns’ : int

Default = max(50, 5n)

The value of i specifies the maximum number of iterations allowed before termination. If i < 0, the default value is used.

Problems whose Hessian matrices at the solution contain sets of clustered eigenvalues are likely to be minimized in significantly fewer than n iterations. Problems without this property may require anything between n and 5n iterations, with approximately 2n iterations being a common figure for moderately difficult problems.

‘Linesearch Tolerance’ : float

Default = 0.9

The value r controls the accuracy with which the step α taken during each iteration approximates a minimum of the function along the search direction (the smaller the value of r, the more accurate the linesearch). The default value r = 0.9 requests an inaccurate search, and is appropriate for most problems. A more accurate search may be appropriate when it is desirable to reduce the number of iterations – for example, if the objective function is cheap to evaluate. If r < 0 or r ≥ 1, the default value is used.

‘List’ : valueless

Option ‘List’ enables printing of each option specification as it is supplied. ‘Nolist’ suppresses this printing.

‘Nolist’ : valueless

Default for uncon_conjgrd_comp

Option ‘List’ enables printing of each option specification as it is supplied. ‘Nolist’ suppresses this printing.

‘Maximum Step Length’ : float

Default = 10^20

If r > 0, the maximum allowable step length for the linesearch is taken as min(1/(100ε), r), where ε is the machine precision. If r ≤ 0, the default value is used.

‘Optimality Tolerance’ : float

Default = ε_r^0.8 (where ε_r is the value of ‘Function Precision’)

The argument r specifies the accuracy to which you wish the final iterate to approximate a solution of the problem. Broadly speaking, r indicates the number of correct figures desired in the objective function at the solution. For example, if r is 10^-6 and termination occurs with no exception or warning raised (see Parameters), then the final point satisfies the termination criteria in which r represents ‘Optimality Tolerance’. If r < ε_r or r ≥ 1, the default value is used. If ‘Optimality Tolerance’ is chosen below a certain threshold, it will automatically be reset to another value.

‘Print Level’ : int

The value i controls the amount of printout produced by uncon_conjgrd_comp, as indicated below. A detailed description of the printout is given in Description of Printed Output (summary output at each iteration and the final solution).

i

Output

0

No output.

1

The final solution only.

5

One line of summary output (less than 80 characters; see Description of Printed Output) for each iteration (no printout of the final solution).

≥ 10

The final solution and one line of summary output for each iteration.

‘Start Objective Check at Variable’ : int

Default = 1

These keywords take effect only if ‘Verify Level’ > 0. They may be used to control the verification of gradient elements computed by objfun. For example, if the first 30 elements of the objective gradient appeared to be correct in an earlier run, so that only element 31 remains questionable, it is reasonable to specify ‘Start Objective Check at Variable’ = 31. If the first 30 variables appear linearly in the objective, so that the corresponding gradient elements are constant, the above choice would also be appropriate.

If k1 ≤ 0 or k1 > min(n, k2), the default value is used. If k2 ≤ 0 or k2 > n, the default value is used.

‘Stop Objective Check at Variable’ : int

Default = n

These keywords take effect only if ‘Verify Level’ > 0. They may be used to control the verification of gradient elements computed by objfun. For example, if the first 30 elements of the objective gradient appeared to be correct in an earlier run, so that only element 31 remains questionable, it is reasonable to specify ‘Start Objective Check at Variable’ = 31. If the first 30 variables appear linearly in the objective, so that the corresponding gradient elements are constant, the above choice would also be appropriate.

If k1 ≤ 0 or k1 > min(n, k2), the default value is used. If k2 ≤ 0 or k2 > n, the default value is used.

‘Verify Level’ : int

Default = 0

These keywords refer to finite difference checks on the gradient elements computed by objfun. Gradients are verified at the user-supplied initial estimate of the solution. The possible choices for i are as follows:

i

Meaning

-1

No checks are performed.

0

Only a ‘cheap’ test will be performed, requiring one call to objfun.

≥ 1

In addition to the ‘cheap’ test, individual gradient elements will also be checked using a reliable (but more expensive) test.

For example, the objective gradient will be verified if ‘Verify’, ‘Verify = YES’, ‘Verify Gradients’, ‘Verify Objective Gradients’ or ‘Verify Level’ = 1 is specified.

‘Verify’ : valueless

These keywords refer to finite difference checks on the gradient elements computed by objfun. Gradients are verified at the user-supplied initial estimate of the solution. The possible choices for i are as follows:

i

Meaning

-1

No checks are performed.

0

Only a ‘cheap’ test will be performed, requiring one call to objfun.

≥ 1

In addition to the ‘cheap’ test, individual gradient elements will also be checked using a reliable (but more expensive) test.

For example, the objective gradient will be verified if ‘Verify’, ‘Verify = YES’, ‘Verify Gradients’, ‘Verify Objective Gradients’ or ‘Verify Level’ = 1 is specified.

‘Verify Gradients’ : valueless

These keywords refer to finite difference checks on the gradient elements computed by objfun. Gradients are verified at the user-supplied initial estimate of the solution. The possible choices for i are as follows:

i

Meaning

-1

No checks are performed.

0

Only a ‘cheap’ test will be performed, requiring one call to objfun.

≥ 1

In addition to the ‘cheap’ test, individual gradient elements will also be checked using a reliable (but more expensive) test.

For example, the objective gradient will be verified if ‘Verify’, ‘Verify = YES’, ‘Verify Gradients’, ‘Verify Objective Gradients’ or ‘Verify Level’ = 1 is specified.

‘Verify Objective Gradients’ : valueless

These keywords refer to finite difference checks on the gradient elements computed by objfun. Gradients are verified at the user-supplied initial estimate of the solution. The possible choices for i are as follows:

i

Meaning

-1

No checks are performed.

0

Only a ‘cheap’ test will be performed, requiring one call to objfun.

≥ 1

In addition to the ‘cheap’ test, individual gradient elements will also be checked using a reliable (but more expensive) test.

For example, the objective gradient will be verified if ‘Verify’, ‘Verify = YES’, ‘Verify Gradients’, ‘Verify Objective Gradients’ or ‘Verify Level’ = 1 is specified.
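The optional parameters above are supplied through the companion option-setting functions noted at the top of this page. The sketch below assumes the single-string 'Keyword = value' form accepted by uncon_conjgrd_option_string(); the particular keywords and values are illustrative only.

from naginterfaces.library import opt

# Initialize the communication structure before supplying options
# (initializer argument assumed, as in the earlier sketch).
comm = opt.nlp1_init('uncon_conjgrd_comp')

# Each optional parameter is supplied as a 'Keyword = value' string.
for optstr in (
    'Print Level = 1',
    'Iteration Limit = 500',
    'Optimality Tolerance = 1.0e-6',
    'Verify Level = 1',
):
    opt.uncon_conjgrd_option_string(optstr, comm)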

Raises
NagValueError
(errno )

Gradient at the starting point is too small.

(errno )

On entry, .

Constraint: .

Warns
NagAlgorithmicWarning
(errno )

Current point cannot be improved upon.

(errno )

Large errors found in the derivatives.

NagAlgorithmicMajorWarning
(errno )

Too many iterations.

(errno )

Computed upper bound on step length is too small.

NagCallbackTerminateWarning
(errno )

User requested termination.

Notes

In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility.

uncon_conjgrd_comp is designed to solve unconstrained minimization problems of the form

minimize F(x) with respect to x ∈ R^n,

where x is an n-element vector.

You must supply an initial estimate of the solution.

For maximum reliability, it is preferable to provide all first partial derivatives. If all of the derivatives cannot be provided, you are recommended to obtain approximate values (using finite differences) by calling estimate_deriv() from within objfun; a minimal finite-difference sketch is given below.
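Purely as an illustration of the finite-difference fallback mentioned above, the sketch below approximates the gradient inside objfun with central differences on a two-variable Rosenbrock-style function; it does not use estimate_deriv(), whose interface is documented separately.

import numpy as np

def objfun(mode, x, nstate, data=None):
    """Callback returning F(x) and a central-difference approximation to its gradient."""
    x = np.asarray(x, dtype=float)

    def f(v):
        # Two-variable Rosenbrock-style objective, for illustration only.
        return float(100.0*(v[1] - v[0]**2)**2 + (1.0 - v[0])**2)

    objf = f(x)
    objgrd = np.zeros_like(x)
    h = 1.0e-7  # difference interval; in practice tie this to 'Function Precision'
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        objgrd[j] = (f(x + e) - f(x - e)) / (2.0*h)
    return objf, objgrd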

The method used by uncon_conjgrd_comp is described in Algorithmic Details.

References

Gill, P E and Murray, W, 1979, Conjugate-gradient methods for large-scale nonlinear optimization, Technical Report SOL 79-15, Department of Operations Research, Stanford University

Gill, P E, Murray, W and Wright, M H, 1981, Practical Optimization, Academic Press