e04ky is an easy-to-use quasi-Newton algorithm for finding a minimum of a function F(x_1, x_2, …, x_n), subject to fixed upper and lower bounds on the independent variables x_1, x_2, …, x_n, when first derivatives of F are available.
It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).
Syntax
C#
public static void e04ky(
    int n,
    int ibound,
    E04.E04KY_FUNCT2 funct2,
    double[] bl,
    double[] bu,
    double[] x,
    out double f,
    double[] g,
    int[] iw,
    double[] w,
    out int ifail
)

Visual Basic
Public Shared Sub e04ky ( _
    n As Integer, _
    ibound As Integer, _
    funct2 As E04.E04KY_FUNCT2, _
    bl As Double(), _
    bu As Double(), _
    x As Double(), _
    <OutAttribute> ByRef f As Double, _
    g As Double(), _
    iw As Integer(), _
    w As Double(), _
    <OutAttribute> ByRef ifail As Integer _
)

Visual C++
public:
static void e04ky(
    int n,
    int ibound,
    E04::E04KY_FUNCT2^ funct2,
    array<double>^ bl,
    array<double>^ bu,
    array<double>^ x,
    [OutAttribute] double% f,
    array<double>^ g,
    array<int>^ iw,
    array<double>^ w,
    [OutAttribute] int% ifail
)

F#
static member e04ky :
    n : int *
    ibound : int *
    funct2 : E04.E04KY_FUNCT2 *
    bl : float[] *
    bu : float[] *
    x : float[] *
    f : float byref *
    g : float[] *
    iw : int[] *
    w : float[] *
    ifail : int byref
    -> unit
Parameters
- n
- Type: System.Int32
  On entry: the number of independent variables.
  Constraint: n ≥ 1.
- ibound
- Type: System.Int32
  On entry: indicates whether the facility for dealing with bounds of special forms is to be used. It must be set to one of the following values:
  - ibound = 0, if you are supplying all the l_j and u_j individually.
  - ibound = 1, if there are no bounds on any x_j.
  - ibound = 2, if all the bounds are of the form 0 ≤ x_j.
  - ibound = 3, if l_1 = l_2 = … = l_n and u_1 = u_2 = … = u_n.
  Constraint: 0 ≤ ibound ≤ 3.
- funct2
- Type: NagLibrary.E04.E04KY_FUNCT2
  You must supply funct2 to calculate the values of the function F(x) and its first derivatives ∂F/∂x_j at any point x. It should be tested separately before being used in conjunction with e04ky (see the E04 class). A sketch of such a delegate is given after this parameter list.
- A delegate of type E04KY_FUNCT2.
- bl
- Type: array<System.Double>[]
  An array of size [n].
  On entry: the lower bounds l_j.
  If ibound is set to 0, you must set bl[j−1] to l_j, for j = 1, 2, …, n. (If a lower bound is not specified for a particular x_j, the corresponding bl[j−1] should be set to −10^6.)
  On exit: the lower bounds actually used by e04ky.
- bu
- Type: array<System.Double>[]
  An array of size [n].
  On entry: the upper bounds u_j.
  If ibound is set to 0, you must set bu[j−1] to u_j, for j = 1, 2, …, n. (If an upper bound is not specified for a particular x_j, the corresponding bu[j−1] should be set to 10^6.)
  On exit: the upper bounds actually used by e04ky.
- x
- Type: array<System.Double>[]
  An array of size [n].
  On entry: x[j−1] must be set to a guess at the jth component of the position of the minimum, for j = 1, 2, …, n. The method checks the gradient at the starting point, and is more likely to detect any error in your programming if the initial x[j−1] are nonzero and mutually distinct.
  On exit: the lowest point found during the calculations. Thus, if ifail = 0 on exit, x[j−1] is the jth component of the position of the minimum.
- f
- Type: System.Double%
  On exit: the value of F(x) corresponding to the final point stored in x.
- g
- Type: array<System.Double>[]
  An array of size [n].
  On exit: g[j−1] contains the value of ∂F/∂x_j corresponding to the final point stored in x, for j = 1, 2, …, n; the value of g[j−1] for variables not on a bound should normally be close to zero.
- iw
- Type: array<System.Int32>[]
  An array of size [liw].
  On exit: if ifail = 0, 3 or 5, the first n elements of iw contain information about which variables are currently on their bounds and which are free. Specifically, if x_i is:
  – fixed on its upper bound, iw[i−1] is −1;
  – fixed on its lower bound, iw[i−1] is −2;
  – effectively a constant (i.e., l_i = u_i), iw[i−1] is −3;
  – free, iw[i−1] gives its position in the sequence of free variables.
  In addition, iw[n] contains the number of free variables (i.e., n_z). The rest of the array is used as workspace.
- w
- Type: array<System.Double>[]
  An array of size [lw].
  On exit: if ifail = 0, 3 or 5, w[i−1] contains the ith element of the projected gradient vector g_z, for i = 1, 2, …, n. In addition, w[n] contains an estimate of the condition number of the projected Hessian matrix (i.e., k). The rest of the array is used as workspace.
- ifail
- Type: System.Int32%
  On exit: ifail = 0 unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).
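The following is a minimal sketch of a funct2 delegate for a simple two-variable objective. It is illustrative only: the parameter names and the exact delegate signature (n, the current point xc, the function value fc and the gradient array gc) are assumptions patterned on the e04ky argument list and should be checked against the E04KY_FUNCT2 documentation.

// Minimal sketch of a funct2 delegate (signature assumed; see lead-in above).
// It must return F(xc) in fc and the first derivatives dF/dx_j in gc[j-1].
static void MyFunct2(int n, double[] xc, out double fc, double[] gc)
{
    // Illustrative objective: F(x) = (x_1 - 1)^2 + 2*(x_2 + 0.5)^2
    double d1 = xc[0] - 1.0;
    double d2 = xc[1] + 0.5;
    fc = d1 * d1 + 2.0 * d2 * d2;
    gc[0] = 2.0 * d1;   // dF/dx_1
    gc[1] = 4.0 * d2;   // dF/dx_2
}

Because e04ky checks the supplied gradient at the starting point, gc must be coded consistently with fc; an inconsistent gradient will normally be reported via ifail (see [Error Indicators and Warnings]).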
Description
e04ky is applicable to problems of the form:
    Minimize F(x_1, x_2, …, x_n)  subject to  l_j ≤ x_j ≤ u_j,  j = 1, 2, …, n,
when first derivatives are available.
Special provision is made for problems which actually have no bounds on the x_j, problems which have only non-negativity bounds, and problems in which l_1 = l_2 = … = l_n and u_1 = u_2 = … = u_n. You must supply a method to calculate the values of F(x) and its first derivatives ∂F/∂x_j at any point x.
From a starting point that you supply there is generated, on the basis of estimates of the curvature of F, a sequence of feasible points which is intended to converge to a local minimum of the constrained function. An attempt is made to verify that the final point is a minimum.
A typical iteration starts at the current point x, where n_z (say) variables are free from both their bounds. The projected gradient vector g_z, whose elements are the derivatives of F(x) with respect to the free variables, is known. A unit lower triangular matrix L and a diagonal matrix D (both of dimension n_z), such that LDLᵀ is a positive definite approximation of the matrix of second derivatives with respect to the free variables (i.e., the projected Hessian), are also held. The equations
    LDLᵀ p_z = −g_z
are solved to give a search direction p_z, which is expanded to an n-vector p by the insertion of appropriate zero elements. Then α is found such that F(x + αp) is approximately a minimum (subject to the fixed bounds) with respect to α; x is replaced by x + αp, and the matrices L and D are updated so as to be consistent with the change produced in the gradient by the step αp. If any variable actually reaches a bound during the search along p, it is fixed and n_z is reduced for the next iteration.
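For illustration only, the following sketch shows how such a search direction could be obtained from the LDLᵀ factorization by forward substitution, diagonal scaling and back substitution. This is standard linear algebra rather than a library call: e04ky performs the equivalent computation internally, and the dense storage used here (a full unit lower triangular array L and a vector d holding the diagonal of D) is an assumption made purely for clarity.

// Illustrative only: solve L * D * L^T * p = -g for the search direction p,
// where L is unit lower triangular and d holds the diagonal of D.
static double[] SolveLDLt(double[,] L, double[] d, double[] g)
{
    int nz = d.Length;
    double[] y = new double[nz];
    // Forward substitution: solve L y = -g (unit diagonal, so no division).
    for (int i = 0; i < nz; i++)
    {
        double s = -g[i];
        for (int j = 0; j < i; j++) s -= L[i, j] * y[j];
        y[i] = s;
    }
    // Diagonal scaling: solve D z = y (z overwrites y).
    for (int i = 0; i < nz; i++) y[i] /= d[i];
    // Back substitution: solve L^T p = z.
    double[] p = new double[nz];
    for (int i = nz - 1; i >= 0; i--)
    {
        double s = y[i];
        for (int j = i + 1; j < nz; j++) s -= L[j, i] * p[j];
        p[i] = s;
    }
    return p;
}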
There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all the active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., n_z is increased). Otherwise minimization continues in the current subspace provided that this is practicable. When it is not, or when the stronger convergence criteria are already satisfied, then, if one or more Lagrange multiplier estimates are close to zero, a slight perturbation is made in the values of the corresponding variables in turn until a lower function value is obtained. The normal algorithm is then resumed from the perturbed point.
If a saddle point is suspected, a local search is carried out with a view to moving away from the saddle point. A local search is also performed when a point is found which is thought to be a constrained minimum.
References
Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory
Error Indicators and Warnings
Note: e04ky may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the method:
- ifail = 1
  On entry, n < 1, or ibound < 0, or ibound > 3, or ibound = 0 and bl[j−1] > bu[j−1] for some j, or ibound = 3 and bl[0] > bu[0], or liw or lw is too small.
- ifail = 2
  There have been a large number of function evaluations, yet the algorithm does not seem to be converging. The calculations can be restarted from the final point held in x. The error may also indicate that F(x) has no minimum.
- ifail = 3
  The conditions for a minimum have not all been met, but a lower point could not be found and the algorithm has failed.
- ifail = 4
  An overflow has occurred during the computation. This is an unlikely failure, but if it occurs you should restart at the latest point given in x.
- ifail = 5, 6, 7 or 8
  There is some doubt about whether the point found by e04ky is a minimum. The degree of confidence in the result decreases as ifail increases. Thus, when ifail = 5 it is probable that the final x gives a good estimate of the position of a minimum, but when ifail = 8 it is very unlikely that the method has found a minimum.
- ifail = 9
  In the search for a minimum, the modulus of one of the variables has become very large (≈10^6). This indicates that there is a mistake in funct2, that your problem has no finite solution, or that the problem needs rescaling (see [Further Comments]).
- ifail = 10
  It is very likely that you have made an error in forming the gradient.
If you are dissatisfied with the result (e.g., because ifail = 5, 6, 7 or 8), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. If persistent trouble occurs it may be advisable to try e04kz.
Accuracy
A successful exit (ifail = 0) is made from e04ky when (B1, B2 and B3) or B4 hold, and the local search confirms a minimum, where
- B1, B2 and B3 are tolerance tests on, respectively, the length of the latest step, the latest change in F and the norm of the projected gradient g_z, and B4 is a stricter test on the norm of g_z alone.
If ifail = 0, then the vector returned in x on exit, x_sol say, is almost certainly an estimate of the position of the minimum, x_true say, to the accuracy specified by the internal tolerance used in these tests.
If ifail = 3 or 5, x_sol may still be a good estimate of x_true, but the following checks should be made. Let k denote an estimate of the condition number of the projected Hessian matrix at x_sol. (The value of k is returned in w[n].) If
(i) the sequence {F(x)} converges to F(x_sol) at a superlinear or a fast linear rate,
(ii) the projected gradient at x_sol is very small, and
(iii) k is not large compared with the reciprocal of the norm of the projected gradient at x_sol,
then it is almost certain that x_sol is a close approximation to the position of a minimum. When (ii) is true, then usually F(x_sol) is a close approximation to F(x_true).
When a successful exit is made then, for a computer with a mantissa of t decimals, one would expect to get about t/2 decimals accuracy in x, and about t−1 decimals accuracy in F, provided the problem is reasonably well scaled.
Parallelism and Performance
None.
Further Comments
The number of iterations required depends on the number of variables, the behaviour of F(x) and the distance of the starting point from the solution. The number of operations performed in an iteration of e04ky is roughly proportional to n². In addition, each iteration makes at least one call of funct2. So, unless F(x) and the gradient vector can be evaluated very quickly, the run time will be dominated by the time spent in funct2.
Ideally the problem should be scaled so that at the solution the value of F(x) and the corresponding values of x_1, x_2, …, x_n are each in the range (−1, +1), and so that at points a unit distance away from the solution, F is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that e04ky will take less computer time.
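As a purely illustrative sketch of this kind of rescaling, the objective can be wrapped so that e04ky works with scaled variables y_j = x_j / s_j. The scale factors s, the helper OriginalFunct2 and the delegate signature below are assumptions introduced for the example, not part of the library.

// Hypothetical unscaled objective: F(x) = (x_1 - 200)^2 + (100 x_2 + 1)^2,
// whose minimizer (200, -0.01) is badly scaled.
static void OriginalFunct2(int n, double[] xc, out double fc, double[] gc)
{
    double d1 = xc[0] - 200.0;
    double d2 = 100.0 * xc[1] + 1.0;
    fc = d1 * d1 + d2 * d2;
    gc[0] = 2.0 * d1;
    gc[1] = 200.0 * d2;
}

// Assumed scale factors so that the scaled solution y = (2, -1) has unit size.
static readonly double[] s = { 100.0, 0.01 };

// funct2 actually passed to e04ky: it works in the scaled variables y.
static void ScaledFunct2(int n, double[] yc, out double fc, double[] gc)
{
    double[] xc = new double[n];
    for (int j = 0; j < n; j++) xc[j] = s[j] * yc[j];   // recover x from y

    OriginalFunct2(n, xc, out fc, gc);                  // F and dF/dx_j

    for (int j = 0; j < n; j++) gc[j] *= s[j];          // chain rule: dF/dy_j = s_j * dF/dx_j
}

Any bounds passed to e04ky must, of course, also be expressed in the scaled variables.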
Example
A program to minimize a function F(x) of several variables, subject to fixed lower and upper bounds on the variables, starting from a given initial guess.
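The following is a minimal, hypothetical sketch of how such a driver might call e04ky from C#. The objective, bounds and starting point are illustrative assumptions (they are not the documented example problem), the exact form of the E04KY_FUNCT2 delegate is assumed as in the earlier sketch, and the workspace lengths for iw and w should be taken from the e04ky documentation for your library version.

using System;
using NagLibrary;

class E04kyExample
{
    // Illustrative objective: F(x) = (x_1 - 2)^2 + (x_2 + 1)^2,
    // with gradient (2(x_1 - 2), 2(x_2 + 1)).
    static void Funct2(int n, double[] xc, out double fc, double[] gc)
    {
        double d1 = xc[0] - 2.0;
        double d2 = xc[1] + 1.0;
        fc = d1 * d1 + d2 * d2;
        gc[0] = 2.0 * d1;
        gc[1] = 2.0 * d2;
    }

    static void Main()
    {
        int n = 2;
        int ibound = 0;                   // l_j and u_j supplied individually
        double[] bl = { 0.0, -2.0 };      // lower bounds
        double[] bu = { 5.0,  0.0 };      // upper bounds
        double[] x = { 4.0, -1.5 };       // starting guess (nonzero, mutually distinct)
        double[] g = new double[n];
        // Workspace lengths below are assumptions; consult the documentation.
        int[] iw = new int[n + 2];
        double[] w = new double[Math.Max(10, n * (n + 7) / 2)];
        double f;
        int ifail;

        E04.e04ky(n, ibound, Funct2, bl, bu, x, out f, g, iw, w, out ifail);

        Console.WriteLine("ifail = " + ifail);
        Console.WriteLine("Minimum F = " + f);
        for (int j = 0; j < n; j++)
            Console.WriteLine("x[" + j + "] = " + x[j] + "   g[" + j + "] = " + g[j]);
    }
}

On successful exit (ifail = 0) the gradient components for variables not on a bound should be close to zero, as noted in the description of g above.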