The routine may be called by the names e01aef or nagf_interp_dim1_cheb.
3 Description
Let $m$ distinct values $x_i$ of an independent variable $x$ be given, with $x_{\min} \le x_i \le x_{\max}$, for $i = 1, 2, \ldots, m$. For each value $x_i$, suppose that the value $y_i$ of the dependent variable $y$ together with the first $p_i$ derivatives of $y$ with respect to $x$ are given. Each $p_i$ must, therefore, be a non-negative integer, with the total number of interpolating conditions, $n$, equal to $m + \sum_{i=1}^{m} p_i$.
e01aef calculates the unique polynomial $q(x)$ of degree $n-1$ (or less) which is such that $q^{(k)}(x_i) = y_i^{(k)}$, for $i = 1, 2, \ldots, m$ and $k = 0, 1, \ldots, p_i$. Here $q^{(0)}(x_i)$ means $q(x_i)$. This polynomial is represented in Chebyshev series form in the normalized variable $\bar{x}$, as follows:
$$q(x) = \tfrac{1}{2}a_0 T_0(\bar{x}) + a_1 T_1(\bar{x}) + \cdots + a_{n-1} T_{n-1}(\bar{x}),$$
where
$$\bar{x} = \frac{2x - x_{\min} - x_{\max}}{x_{\max} - x_{\min}},$$
so that $-1 \le \bar{x} \le 1$ for $x$ in the interval $x_{\min}$ to $x_{\max}$, and where $T_j(\bar{x})$ is the Chebyshev polynomial of the first kind of degree $j$ with argument $\bar{x}$.
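As an illustration of this representation (and not a substitute for the Library routines mentioned in the next paragraph), the following Python sketch maps $x$ to $\bar{x}$ and evaluates a Chebyshev series that uses the $\tfrac{1}{2}a_0$ convention above. The helper name eval_nag_cheb and the use of NumPy are this document's illustration only; the coefficient array a is assumed to have come from e01aef or an equivalent source.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    def eval_nag_cheb(a, x, xmin, xmax):
        # Map x to the normalized variable xbar in [-1, 1].
        xbar = (2.0 * np.asarray(x, dtype=float) - xmin - xmax) / (xmax - xmin)
        c = np.asarray(a, dtype=float).copy()
        c[0] *= 0.5                   # convert the 1/2*a_0 convention to numpy's c_0 convention
        return C.chebval(xbar, c)     # sum_j c_j T_j(xbar)

For actual use within the Library, evaluation of the series should be done with the routine noted in the next paragraph.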
(The polynomial interpolant can subsequently be evaluated for any value of $x$ in the given range by using e02akf. Chebyshev series representations of the derivative(s) and integral(s) of $q(x)$ may be obtained by (repeated) use of e02ahf and e02ajf.)
The method used consists first of constructing a divided-difference table from the normalized values $\bar{x}_i$ of the independent variable and the given values of $y$ and its derivatives with respect to $\bar{x}$. The Newton form of $q(x)$ is then obtained from this table, as described in Huddleston (1974) and Krogh (1970), with the modification described in Section 9.2. The Newton form of the polynomial is then converted to Chebyshev series form as described in Section 9.3.
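For readers who wish to experiment with the same class of interpolation conditions outside the Library, scipy.interpolate.KroghInterpolator implements the scheme of Krogh (1970) cited above, although in Newton/power form rather than Chebyshev series form and without the refinement described below. A minimal sketch with hypothetical data:

    import numpy as np
    from scipy.interpolate import KroghInterpolator

    # Hypothetical data: value and first derivative at x = 0, value only at x = 1.
    xi = np.array([0.0, 0.0, 1.0])    # a repeated abscissa marks a derivative condition
    yi = np.array([1.0, -1.0, 2.0])   # y(0), y'(0), y(1)
    q = KroghInterpolator(xi, yi)
    print(q(np.array([0.0, 0.5, 1.0])))   # q(0) = 1 and q(1) = 2 are reproduced
    print(q.derivative(0.0, der=1))       # q'(0) = -1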
Since the errors incurred by these stages can be considerable, a form of iterative refinement is used to improve the solution. This refinement is particularly useful when derivatives of rather high order are given in the data. In reasonable examples, the refinement will usually terminate with a certain accuracy criterion satisfied by the polynomial (see Section 7). In more difficult examples, the criterion may not be satisfied and refinement will continue until the maximum number of iterations (as specified by the input argument itmax) is reached.
In extreme examples, the iterative process may diverge (even though the accuracy criterion is satisfied): if a certain divergence criterion is satisfied, the process terminates at once. In all cases the routine returns the ‘best’ polynomial achieved before termination. For the definition of ‘best’ and details of iterative refinement and termination criteria, see Section 9.4.
4 References
Huddleston R E (1974) CDC 6600 routines for the interpolation of data and of data with derivatives SLL-74-0214 Sandia Laboratories (Reprint)
Krogh F T (1970) Efficient algorithms for polynomial interpolation and numerical differentiation Math. Comput. 24 185–190
5 Arguments
1: m – Integer Input
On entry: $m$, the number of given values of the independent variable $x$.
Constraint: m ≥ 1.
2: xmin – Real (Kind=nag_wp) Input
3: xmax – Real (Kind=nag_wp) Input
On entry: the lower and upper end points, respectively, of the interval $[x_{\min}, x_{\max}]$. If they are not determined by your problem, it is recommended that they be set respectively to the smallest and largest values among the $x_i$.
Constraint: xmin < xmax.
4: x(m) – Real (Kind=nag_wp) array Input
On entry: x(i) must be set to the value of $x_i$, for $i = 1, 2, \ldots, m$. The x(i) need not be ordered.
Constraint: xmin ≤ x(i) ≤ xmax, and the x(i) must be distinct.
5: y(n) – Real (Kind=nag_wp) array Input
On entry: the given values of the dependent variable, and derivatives, as follows (an illustrative packing example is given at the end of this argument list):
The first $p_1 + 1$ elements contain $y_1, y_1^{(1)}, \ldots, y_1^{(p_1)}$ in that order.
The next $p_2 + 1$ elements contain $y_2, y_2^{(1)}, \ldots, y_2^{(p_2)}$ in that order.
The last $p_m + 1$ elements contain $y_m, y_m^{(1)}, \ldots, y_m^{(p_m)}$ in that order.
6: ip(m) – Integer array Input
On entry: ip(i) must be set to $p_i$, the order of the highest-order derivative whose value is given at $x_i$, for $i = 1, 2, \ldots, m$. If the value of $y$ only is given for some $x_i$ then the corresponding value of ip(i) must be zero.
Constraint: ip(i) ≥ 0, for $i = 1, 2, \ldots, m$.
7: n – Integer Input
On entry: $n$, the total number of interpolating conditions.
Constraint: n = m + ip(1) + ip(2) + ⋯ + ip(m).
8: itmin – Integer Input
9: itmax – Integer Input
On entry: respectively the minimum and maximum number of iterations to be performed by the routine (for full details see Section 9.4). Setting itmin and/or itmax negative or zero invokes default value(s) of 2 and/or 10, respectively.
The default values will be satisfactory for most problems, but occasionally significant improvement will result from using higher values.
Suggested value: itmin = 0 and itmax = 0.
10: a(n) – Real (Kind=nag_wp) array Output
On exit: a(i+1) contains the coefficient $a_i$ in the Chebyshev series representation of $q(x)$, for $i = 0, 1, \ldots, n-1$.
11: wrk(lwrk) – Real (Kind=nag_wp) array Output
On exit: used as workspace, but see also Section 9.5.
12: lwrk – Integer Input
On entry: the dimension of the array wrk as declared in the (sub)program from which e01aef is called.
Constraint: lwrk ≥ 7×n + 5×ipmax + m + 7, where ipmax is the largest element of ip(i), for $i = 1, 2, \ldots, m$.
13: iwrk(liwrk) – Integer array Output
On exit: used as workspace, but see also Section 9.5.
14: liwrk – Integer Input
On entry: the dimension of the array iwrk as declared in the (sub)program from which e01aef is called.
Constraint: liwrk ≥ 2×m + 2.
15: ifail – Integer Input/Output
On entry: ifail must be set to 0, −1 or 1 to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of 0 causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of −1 means that an error message is printed while a value of 1 means that it is not.
If halting is not appropriate, the value −1 or 1 is recommended. If message printing is undesirable, then the value 1 is recommended. Otherwise, the value 0 is recommended. When the value −1 or 1 is used it is essential to test the value of ifail on exit.
On exit: ifail = 0 unless the routine detects an error or a warning has been flagged (see Section 6).
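To make the layout of the arguments ip, y and n concrete (see their descriptions above), here is a small hypothetical dataset assembled in Python; this fragment is purely illustrative and independent of the Library.

    import numpy as np

    # Hypothetical data: at x = 0 the value and first derivative are known (p_1 = 1);
    # at x = 1 only the value is known (p_2 = 0).
    x  = np.array([0.0, 1.0])
    ip = np.array([1, 0])
    y  = np.array([1.0, -1.0, 2.0])   # y_1, y_1^(1), y_2, packed in that order
    m  = len(x)
    n  = m + ip.sum()                 # total number of interpolating conditions
    assert n == len(y)                # here n = 3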
6 Error Indicators and Warnings
If on entry ifail = 0 or −1, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
ifail = 1
On entry, liwrk is too small: liwrk = ⟨value⟩. Minimum possible dimension: ⟨value⟩.
On entry, lwrk is too small: lwrk = ⟨value⟩. Minimum possible dimension: ⟨value⟩.
On entry, m = ⟨value⟩.
Constraint: m ≥ 1.
On entry, n = ⟨value⟩ and m = ⟨value⟩.
Constraint: n = m + ip(1) + ip(2) + ⋯ + ip(m).
On entry, I = ⟨value⟩ and ip(I) = ⟨value⟩.
Constraint: ip(I) ≥ 0, for I = 1, 2, …, m.
On entry, I = ⟨value⟩, J = ⟨value⟩ and x(I) = ⟨value⟩.
Constraint: the x(i) must be distinct.
On entry, I = ⟨value⟩, x(I) = ⟨value⟩, xmin = ⟨value⟩ and xmax = ⟨value⟩.
Constraint: xmin ≤ x(I) ≤ xmax.
On entry, xmin = ⟨value⟩ and xmax = ⟨value⟩.
Constraint: xmin < xmax.
ifail = 2
Not all the performance indices are less than eight times the machine precision, although itmax iterations have been performed. Arguments a, wrk and iwrk relate to the best polynomial determined. A more accurate solution may possibly be obtained by increasing itmax and recalling the routine. See also Sections 7, 9.4 and 9.5.
ifail = 3
The computation has been terminated because the iterative process appears to be diverging. (Arguments a, wrk and iwrk relate to the best polynomial determined.) Thus the problem specified by your data is probably too ill-conditioned for the solution to be satisfactory. This may result from some of the $x_i$ being very close together, or from the number of interpolating conditions, n, being large. If in such cases the conditions do not involve derivatives, you are likely to obtain a much more satisfactory solution to your problem either by cubic spline interpolation (see e01baf) or by curve-fitting with a polynomial or spline in which the number of coefficients is less than n, preferably much less if n is large (see Chapter E02). But see Sections 7, 9.4 and 9.5.
ifail = −99
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
ifail = −399
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
ifail = −999
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
A complete error analysis is not currently available, but the method gives good results for reasonable problems.
It is important to realise that for some sets of data, the polynomial interpolation problem is ill-conditioned. That is, a small perturbation in the data may induce large changes in the polynomial, even in exact arithmetic. Though by no means the worst example, interpolation by a single polynomial to a large number of function values given at points equally spaced across the range is notoriously ill-conditioned and the polynomial interpolating such a dataset is prone to exhibit enormous oscillations between the data points, especially near the ends of the range. These will be reflected in the Chebyshev coefficients being large compared with the given function values. A more familiar example of ill-conditioning occurs in the solution of certain systems of linear algebraic equations, in which a small change in the elements of the matrix and/or in the components of the right-hand side vector induces a relatively large change in the solution vector. The best that can be achieved in these cases is to make the residual vector small in some sense. If this is possible, the computed solution is exact for a slightly perturbed set of data. Similar considerations apply to the interpolation problem.
The residuals are available for inspection
(see Section 9.5).
To assess whether these are reasonable, however, it is necessary to relate them to the largest function and derivative values taken by $q(x)$ over the interval $[x_{\min}, x_{\max}]$. The following performance indices aim to do this. Let the $k$th derivative of $q$ with respect to the normalized variable $\bar{x}$ be given by the Chebyshev series
$$\tfrac{1}{2}a_0^{(k)} T_0(\bar{x}) + a_1^{(k)} T_1(\bar{x}) + \cdots + a_{n-1-k}^{(k)} T_{n-1-k}(\bar{x}).$$
Let $A_k$ denote the sum of the moduli of these coefficients (this is an upper bound on the $k$th derivative in the interval and is taken as a measure of the maximum size of this derivative), and define
$$S_k = \max(A_0, A_1, \ldots, A_k).$$
Then if the root-mean-square value of the residuals of $q^{(k)}$, scaled so as to relate to the normalized variable $\bar{x}$, is denoted by $r_k$, the performance indices are defined by
$$P_k = \frac{r_k}{S_k}, \quad k = 0, 1, \ldots, \max_i p_i.$$
It is expected that, in reasonable cases, they will all be less than (say) 8 times the machine precision (this is the accuracy criterion mentioned in Section 3), and in many cases will be of the order of machine precision or less.
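A sketch of how indices of this form can be computed from a returned coefficient array is given below; the helper performance_indices and its residuals-by-derivative-order argument are hypothetical, and the indices actually used by the routine are those reported in wrk (see Section 9.5).

    import numpy as np
    from numpy.polynomial import chebyshev as C

    def performance_indices(a, resid_by_order):
        # a: coefficients a_0, ..., a_{n-1} in the 1/2*a_0 convention used above.
        # resid_by_order[k]: residuals of the k-th derivative conditions, scaled to xbar.
        c = np.asarray(a, dtype=float).copy()
        c[0] *= 0.5                                   # switch to numpy's plain c_0 convention
        A, P = [], []
        for k, res in enumerate(resid_by_order):
            ck = c if k == 0 else C.chebder(c, m=k)   # series of the k-th derivative w.r.t. xbar
            A.append(np.abs(ck).sum())                # bound on |q^(k)| for -1 <= xbar <= 1
            rk = float(np.sqrt(np.mean(np.square(res)))) if len(res) else 0.0
            P.append(rk / max(A))                     # compare each P_k with 8 * machine precision
        return P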
8 Parallelism and Performance
e01aef is not threaded in any implementation.
9 Further Comments
9.1 Timing
Computation time is approximately proportional to $it \times n^2$, where $it$ is the number of iterations actually used.
(See Section 9.5.)
9.2 Divided-difference Strategy
In constructing each new coefficient in the Newton form of the polynomial, a new $\bar{x}_i$ must be brought into the computation. The chosen $\bar{x}_i$ is that which yields the smallest new coefficient. This strategy increases the stability of the divided-difference technique, sometimes quite markedly, by reducing errors due to cancellation.
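A minimal sketch of this selection rule, restricted for brevity to function values only (no derivative conditions) and therefore not the routine's implementation, is the following Python fragment; the helper name newton_coeffs_greedy is hypothetical.

    import numpy as np

    def newton_coeffs_greedy(x, y):
        # Build Newton-form coefficients, at each step bringing in the remaining
        # node whose new divided difference has the smallest modulus.
        x = np.asarray(x, dtype=float)
        g = np.asarray(y, dtype=float).copy()             # g[r] = f[chosen nodes ..., x_r]
        remaining = list(range(len(x)))
        order, coeffs = [], []
        while remaining:
            j = min(remaining, key=lambda r: abs(g[r]))   # smallest new coefficient
            order.append(j)
            coeffs.append(g[j])
            remaining.remove(j)
            for r in remaining:                           # update divided differences incrementally
                g[r] = (g[r] - coeffs[-1]) / (x[r] - x[j])
        # q(t) = coeffs[0] + coeffs[1]*(t - nodes[0]) + coeffs[2]*(t - nodes[0])*(t - nodes[1]) + ...
        return np.array(coeffs), x[order]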
9.3 Conversion to Chebyshev Form
Conversion from the Newton form to Chebyshev series form is effected by evaluating the former at the $n$ values of $x$ at which $T_{n-1}(\bar{x})$ takes the value $\pm 1$, and then interpolating these $n$ function values by a call of e02aff, which provides the Chebyshev series representation of the polynomial with very small additional relative error.
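The conversion can be imitated outside the Library as follows; this Python fragment (illustrative only, with the hypothetical helper newton_to_chebyshev standing in for the e02aff step) evaluates a Newton-form polynomial at the extrema of $T_{n-1}$ and fits a degree $n-1$ Chebyshev series through those values.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    def newton_to_chebyshev(coeffs, nodes, xmin, xmax):
        # coeffs, nodes: Newton form q(t) = sum_k coeffs[k] * prod_{i<k} (t - nodes[i]).
        n = len(coeffs)                                    # assumes n >= 2
        xbar = np.cos(np.pi * np.arange(n) / (n - 1))      # the points where T_{n-1} = +-1
        t = 0.5 * ((xmax - xmin) * xbar + xmax + xmin)     # map back to [xmin, xmax]
        q = np.full_like(t, coeffs[-1])                    # Horner-style evaluation of the Newton form
        for c_k, node in zip(coeffs[-2::-1], nodes[-2::-1]):
            q = q * (t - node) + c_k
        a = C.chebfit(xbar, q, n - 1)                      # exact interpolation of n values
        a[0] *= 2.0                                        # report a_0 in the 1/2*a_0 convention
        return a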
9.4 Iterative Refinement
The iterative refinement process is performed as follows.
Firstly, an initial approximation, $q_1(x)$ say, is found by the technique described in Section 3. The $k$th step of the refinement process then consists of evaluating the residuals of the $k$th approximation $q_k(x)$, and constructing an interpolant, $dq_k(x)$, to these residuals. The next approximation $q_{k+1}(x)$ to the interpolating polynomial is then obtained as
$$q_{k+1}(x) = q_k(x) + dq_k(x).$$
This completes the description of the $k$th step.
The iterative process is terminated according to the following criteria. When a polynomial is found whose performance indices (as defined in Section 7) are all less than 8 times the machine precision, the process terminates after itmin further iterations (or after a total of itmax iterations if that occurs earlier). This will occur in most reasonable problems. The extra iterations are to allow for the possibility of further improvement. If no such polynomial is found, the process terminates after a total of itmax iterations. Both these criteria are over-ridden, however, in two special cases. Firstly, if for some value of $k$ the sum of the moduli of the Chebyshev coefficients of $dq_k(x)$ is greater than that of $q_k(x)$, it is concluded that the process is diverging and the process is terminated at once ($q_{k+1}(x)$ is not computed).
Secondly, if at any stage, the performance indices are all computed as zero, again the process is terminated at once.
As the iterations proceed, a record is kept of the best polynomial. Subsequently, at the end of each iteration, the new polynomial replaces the current best polynomial if it satisfies two conditions (otherwise the best polynomial remains unchanged). The first condition is that at least one of its root-mean-square residual values, $r_k$ (see Section 7), is smaller than the corresponding value for the current best polynomial. The second condition takes two different forms according to whether or not the performance indices (see Section 7) of the current best polynomial are all less than 8 times the machine precision. If they are, then the largest performance index of the new polynomial is required to be less than that of the current best polynomial. If they are not, the number of indices which are less than 8 times the machine precision must not be smaller than for the current best polynomial. When the iterative process is terminated, it is the polynomial then recorded as best which is returned to you as $q(x)$.
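The refinement loop can be imitated in Python using scipy.interpolate.KroghInterpolator as the underlying interpolant; the sketch below works with the Newton/power form rather than the Chebyshev form and omits the termination tests and the 'best polynomial' bookkeeping, so it illustrates the idea only, and the helper refine is hypothetical.

    import numpy as np
    from scipy.interpolate import KroghInterpolator

    def refine(xi, yi, iterations=3):
        # xi: abscissae with repetitions marking derivative conditions (Krogh convention);
        # yi: the matching value/derivative data, packed as for the argument y above.
        terms = [KroghInterpolator(xi, yi)]                    # q_1
        for _ in range(iterations):
            resid, seen = [], {}
            for xv, yv in zip(xi, yi):
                der = seen.get(xv, 0)                          # derivative order of this condition
                seen[xv] = der + 1
                qk = sum(t.derivatives(xv, der + 1)[der] for t in terms)
                resid.append(yv - qk)                          # residual of this condition
            terms.append(KroghInterpolator(xi, np.array(resid)))   # dq_k interpolates the residuals
        return lambda x: sum(t(x) for t in terms)              # q_{k+1} = q_k + dq_k, accumulated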
9.5 Workspace Information
On successful exit, and also if ifail = 2 or 3 on exit, the following information is contained in the workspace arrays wrk and iwrk:
the leading elements of wrk contain, for $k = 0, 1, \ldots, p$ where $p = \max_i p_i$, the ratio of $P_k$, the performance index relating to the $k$th derivative of the $q(x)$ finally provided, to 8 times the machine precision.
wrk also contains the $n$ residuals: the $k$th of these, for $k = 1, 2, \ldots, n$, is the value of $y_i^{(j)} - q^{(j)}(x_i)$, where $i$ and $j$ are the appropriate values corresponding to the $k$th element in the array y (see the description of y in Section 5).
iwrk(1) contains the number of iterations actually performed in deriving $q(x)$.
If, on exit, ifail = 2 or 3, the $q(x)$ finally provided may still be adequate for your requirements. To assess this you should examine the residuals stored in wrk (see above), for $k = 1, 2, \ldots, n$, to see whether they are acceptably small.
10 Example
This example constructs an interpolant $q(x)$ to a set of function and derivative values read by the program.
The coefficients in the Chebyshev series representation of $q(x)$ are printed, and also the residuals corresponding to each of the given function and derivative values.
This program is written in a generalized form which can read any number of data-sets.