The function may be called by the names: e01aec, nag_interp_dim1_cheb or nag_1d_cheb_interp.
3 Description
Let $m$ distinct values $x_i$ of an independent variable $x$ be given, with $x_{\min} \le x_i \le x_{\max}$, for $i = 1, 2, \ldots, m$. For each value $x_i$, suppose that the value $y_i$ of the dependent variable $y$ together with the first $p_i$ derivatives of $y$ with respect to $x$ are given. Each $p_i$ must, therefore, be a non-negative integer, with the total number of interpolating conditions, $n$, equal to $m + \sum_{i=1}^{m} p_i$.
e01aec calculates the unique polynomial $q(x)$ of degree $n-1$ (or less) which is such that $q^{(k)}(x_i) = y_i^{(k)}$, for $i = 1, 2, \ldots, m$ and $k = 0, 1, \ldots, p_i$. Here $q^{(0)}(x_i)$ means $q(x_i)$. This polynomial is represented in Chebyshev series form in the normalized variable $\bar{x}$, as follows:
$$q(x) = \tfrac{1}{2} a_0 T_0(\bar{x}) + a_1 T_1(\bar{x}) + \cdots + a_{n-1} T_{n-1}(\bar{x}),$$
where
$$\bar{x} = \frac{2x - x_{\min} - x_{\max}}{x_{\max} - x_{\min}},$$
so that $-1 \le \bar{x} \le 1$ for $x$ in the interval $x_{\min}$ to $x_{\max}$, and where $T_j(\bar{x})$ is the Chebyshev polynomial of the first kind of degree $j$ with argument $\bar{x}$.
(The polynomial interpolant can subsequently be evaluated for any value of $x$ in the given range by using e02akc. Chebyshev series representations of the derivative(s) and integral(s) of $q(x)$ may be obtained by (repeated) use of e02ahc and e02ajc.)
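As an informal illustration of this series form (for actual evaluation e02akc should be used), the sketch below applies Clenshaw's recurrence to coefficients stored in the convention above; the helper name eval_cheb_series and its argument list are hypothetical.

/* Evaluate q(x) = (1/2)a[0]T_0(xbar) + a[1]T_1(xbar) + ... + a[n-1]T_{n-1}(xbar),
 * where xbar = (2x - xmin - xmax)/(xmax - xmin), by Clenshaw's recurrence.
 * Illustrative only; in practice call e02akc. */
double eval_cheb_series(const double a[], int n, double xmin, double xmax,
                        double x)
{
    double xbar = (2.0 * x - xmin - xmax) / (xmax - xmin);
    double b1 = 0.0, b2 = 0.0;
    for (int j = n - 1; j >= 1; j--) {
        double b0 = 2.0 * xbar * b1 - b2 + a[j];
        b2 = b1;
        b1 = b0;
    }
    return xbar * b1 - b2 + 0.5 * a[0];   /* first coefficient is halved */
}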
The method used consists first of constructing a divided-difference table from the normalized values $\bar{x}_i$ and the given values of $y$ and its derivatives with respect to $\bar{x}$. The Newton form of $q(x)$ is then obtained from this table, as described in Huddleston (1974) and Krogh (1970), with the modification described in Section 9.2. The Newton form of the polynomial is then converted to Chebyshev series form as described in Section 9.3.
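For illustration only, the following sketch builds a confluent (Hermite) divided-difference table and the corresponding Newton coefficients directly in the original variable, taking the nodes in their natural order and using the y packing described in Section 5; it omits the normalization, the node-ordering strategy of Section 9.2 and the iterative refinement that e01aec applies, and the name newton_hermite_coeffs is hypothetical.

#include <stdlib.h>

/* Newton coefficients c[0..n-1] of the Hermite interpolating polynomial
 *   q(t) = c[0] + c[1](t - z[0]) + c[2](t - z[0])(t - z[1]) + ...
 * x[i] and p[i] are as in Section 5; y[] uses the packing of Section 5
 * (function value followed by derivatives, point by point);
 * n = m + p[0] + ... + p[m-1].  Each x[i] is entered p[i]+1 times in z[]. */
void newton_hermite_coeffs(int m, const double x[], const int p[],
                           const double y[], int n, double z[], double c[])
{
    double *f = malloc(n * sizeof(double));   /* current table column        */
    int *pt   = malloc(n * sizeof(int));      /* expanded index -> point     */
    int *off  = malloc(m * sizeof(int));      /* start of point i within y[] */
    for (int i = 0, j = 0; i < m; i++) {
        off[i] = j;
        for (int k = 0; k <= p[i]; k++, j++) {
            z[j]  = x[i];
            pt[j] = i;
            f[j]  = y[off[i]];                /* zeroth column: y values     */
        }
    }
    c[0] = f[0];
    double fact = 1.0;                        /* running value of k!         */
    for (int k = 1; k < n; k++) {
        fact *= k;
        for (int i = n - 1; i >= k; i--) {
            if (z[i] != z[i - k])
                f[i] = (f[i] - f[i - 1]) / (z[i] - z[i - k]);
            else                              /* confluent (repeated) nodes  */
                f[i] = y[off[pt[i]] + k] / fact;
        }
        c[k] = f[k];
    }
    free(off); free(pt); free(f);
}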
Since the errors incurred by these stages can be considerable, a form of iterative refinement is used to improve the solution. This refinement is particularly useful when derivatives of rather high order are given in the data. In reasonable examples, the refinement will usually terminate with a certain accuracy criterion satisfied by the polynomial (see Section 7). In more difficult examples, the criterion may not be satisfied and refinement will continue until the maximum number of iterations (as specified by the input argument itmax) is reached.
In extreme examples, the iterative process may diverge (even though the accuracy criterion is satisfied): if a certain divergence criterion is satisfied, the process terminates at once. In all cases the function returns the ‘best’ polynomial achieved before termination. For the definition of ‘best’ and details of iterative refinement and termination criteria, see Section 9.4.
4 References
Huddleston R E (1974) CDC 6600 routines for the interpolation of data and of data with derivatives SLL-74-0214 Sandia Laboratories (Reprint)
Krogh F T (1970) Efficient algorithms for polynomial interpolation and numerical differentiation Math. Comput. 24 185–190
5 Arguments
1: m – Integer Input
On entry: $m$, the number of given values of the independent variable $x$.
Constraint: m ≥ 1.
2: xmin – double Input
3: xmax – double Input
On entry: the lower and upper end points, respectively, of the interval $[x_{\min}, x_{\max}]$. If they are not determined by your problem, it is recommended that they be set respectively to the smallest and largest values among the $x_i$.
Constraint: xmin < xmax.
4: x[m] – const double Input
On entry: x[i−1] must be set to the value of $x_i$, for $i = 1, 2, \ldots, m$. The x[i−1] need not be ordered.
Constraint: xmin ≤ x[i−1] ≤ xmax, and the x[i−1] must be distinct.
5: y[dim] – const double Input
Note: the dimension, dim, of the array y
must be at least
n = m + p[0] + p[1] + ⋯ + p[m−1], the total number of interpolating conditions (see Section 3).
On entry: the given values of the dependent variable, and derivatives, as follows (a worked layout is sketched at the end of this argument list):
The first $p_1 + 1$ elements contain $y_1, y_1^{(1)}, \ldots, y_1^{(p_1)}$ in that order.
The next $p_2 + 1$ elements contain $y_2, y_2^{(1)}, \ldots, y_2^{(p_2)}$ in that order.
The last $p_m + 1$ elements contain $y_m, y_m^{(1)}, \ldots, y_m^{(p_m)}$ in that order.
6: p[m] – const Integer Input
On entry: p[i−1] must be set to $p_i$, the order of the highest-order derivative whose value is given at $x_i$, for $i = 1, 2, \ldots, m$. If the value of $y$ only is given for some $x_i$, then the corresponding value of p[i−1] must be zero.
Constraint:
p[i−1] ≥ 0, for $i = 1, 2, \ldots, m$.
7: itmin – Integer Input
8: itmax – Integer Input
On entry: respectively the minimum and maximum number of iterations to be performed by the function (for full details see Section 9.4). Setting itmin and/or itmax negative or zero invokes the corresponding default value(s).
The default values will be satisfactory for most problems, but occasionally significant improvement will result from using higher values.
Suggested value:
itmin = 0 and itmax = 0 (so that the default values are used).
9: a[dim] – double Output
Note: the dimension, dim, of the array a
must be at least
n, the total number of interpolating conditions.
On exit: a[i] contains the coefficient $a_i$ in the Chebyshev series representation of $q(x)$, for $i = 0, 1, \ldots, n-1$.
10: perf[dim] – double Output
Note: the dimension, dim, of the array perf
must be at least
ipmax + n + 1, where ipmax denotes the largest element of p.
On exit: perf[k], for $k = 0, 1, \ldots, \text{ipmax}$, contains the ratio of $P_k$, the performance index relating to the $k$th derivative of the $q(x)$ finally provided, to the threshold used in the accuracy criterion (a fixed multiple of the machine precision; see Section 7).
perf[ipmax + j], for $j = 1, 2, \ldots, n$, contains the $j$th residual, i.e., the value of $y_i^{(k)} - q^{(k)}(x_i)$, where $i$ and $k$ are the appropriate values corresponding to the $j$th element in the array y (see the description of y in Section 5).
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).
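To make the storage scheme above concrete, the following fragment (hypothetical data values, illustrative only) sets up x, p and the packed array y for a case with m = 3 points, where two derivatives are given at the first point, one at the second and none at the third, so that n = m + Σ p_i = 6. Integer is the NAG integer type from nag.h.

#define M 3
const double  x[M] = { 0.0, 0.5, 1.0 };   /* abscissae (need not be ordered)        */
const Integer p[M] = { 2, 1, 0 };         /* highest derivative given at each x[i]  */
/* n = m + p[0] + p[1] + p[2] = 6 interpolating conditions, packed point by point: */
const double  y[6] = {
    1.0, -2.0,  0.5,                      /* point 1: y, y', y''  (p[0]+1 values)   */
    0.8, -1.0,                            /* point 2: y, y'       (p[1]+1 values)   */
    0.3                                   /* point 3: y           (p[2]+1 value)    */
};
const double xmin = 0.0, xmax = 1.0;      /* end points of the interval             */
/* The output arrays would then be declared as double a[6]; and
 * double perf[2 + 1 + 6];  since ipmax = max p[i] = 2 and n = 6. */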
6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.
NE_BAD_PARAM
On entry, argument ⟨value⟩ had an illegal value.
NE_INT
On entry, m = ⟨value⟩.
Constraint: m ≥ 1.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library CL Interface for further information.
NE_REAL_2
On entry, xmin = ⟨value⟩ and xmax = ⟨value⟩.
Constraint: xmin < xmax.
NE_REAL_ARRAY
On entry, I = ⟨value⟩, J = ⟨value⟩ and x[I−1] = x[J−1] = ⟨value⟩.
Constraint: the x[i−1] must be distinct.
On entry, I = ⟨value⟩, x[I−1] = ⟨value⟩, xmin = ⟨value⟩ and xmax = ⟨value⟩.
Constraint: xmin ≤ x[i−1] ≤ xmax, for i = 1, 2, …, m.
7 Accuracy
A complete error analysis is not currently available, but the method gives good results for reasonable problems.
It is important to realise that for some sets of data, the polynomial interpolation problem is ill-conditioned. That is, a small perturbation in the data may induce large changes in the polynomial, even in exact arithmetic. Though by no means the worst example, interpolation by a single polynomial to a large number of function values given at points equally spaced across the range is notoriously ill-conditioned and the polynomial interpolating such a dataset is prone to exhibit enormous oscillations between the data points, especially near the ends of the range. These will be reflected in the Chebyshev coefficients being large compared with the given function values. A more familiar example of ill-conditioning occurs in the solution of certain systems of linear algebraic equations, in which a small change in the elements of the matrix and/or in the components of the right-hand side vector induces a relatively large change in the solution vector. The best that can be achieved in these cases is to make the residual vector small in some sense. If this is possible, the computed solution is exact for a slightly perturbed set of data. Similar considerations apply to the interpolation problem.
The residuals $y_i^{(k)} - q^{(k)}(x_i)$ are available for inspection in the array perf (see Section 5).
To assess whether these are reasonable, however, it is necessary to relate them to the largest function and derivative values taken by $q(x)$ over the interval $[x_{\min}, x_{\max}]$. The following performance indices aim to do this. Let the $k$th derivative of $q$ with respect to the normalized variable $\bar{x}$ be given by the Chebyshev series
$$\tfrac{1}{2} c_0^{(k)} T_0(\bar{x}) + c_1^{(k)} T_1(\bar{x}) + \cdots + c_{n-1-k}^{(k)} T_{n-1-k}(\bar{x}).$$
Let $C_k$ denote the sum of the moduli of these coefficients (this is an upper bound on the $k$th derivative in the interval and is taken as a measure of the maximum size of this derivative), and define
$$S = \max_k C_k.$$
Then if the root-mean-square value of the residuals of $q^{(k)}$, scaled so as to relate to the normalized variable $\bar{x}$, is denoted by $r_k$, the performance indices are defined by
$$P_k = \frac{r_k}{S}, \qquad k = 0, 1, \ldots, \max_i p_i.$$
It is expected that, in reasonable cases, they will all be less than a small fixed multiple of the machine precision (this is the accuracy criterion mentioned in Section 3), and in many cases will be of the order of machine precision or less.
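Since perf[0], …, perf[ipmax] are returned as the ratios of the performance indices to the accuracy threshold, and the n residuals follow them (see Section 5), a simple post-call check along the following lines can be made; the fragment is illustrative and assumes perf, ipmax and n are already set up as described there (fabs requires math.h).

int criterion_met = 1;
for (int k = 0; k <= ipmax; k++)
    if (perf[k] >= 1.0)                 /* ratio >= 1: index exceeds the threshold */
        criterion_met = 0;

double max_res = 0.0;                   /* largest residual, for inspection        */
for (int j = 0; j < n; j++) {
    double r = fabs(perf[ipmax + 1 + j]);
    if (r > max_res) max_res = r;
}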
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
e01aec is not threaded in any implementation.
9 Further Comments
9.1 Timing
Computation time is approximately proportional to the number of iterations actually used, and it grows with the number of interpolating conditions $n$.
9.2 Divided-difference Strategy
In constructing each new coefficient in the Newton form of the polynomial, a new $x_i$ must be brought into the computation. The $x_i$ chosen is that which yields the smallest new coefficient. This strategy increases the stability of the divided-difference technique, sometimes quite markedly, by reducing errors due to cancellation.
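The idea can be sketched, for function-value data only and ignoring the normalized variable, as follows: at each step the as-yet-unused point whose candidate Newton coefficient has the smallest modulus is selected. This is an illustration of the strategy, not the code used inside e01aec; order_nodes_min_coeff is a hypothetical name.

#include <math.h>
#include <stdlib.h>

/* Reorder the nodes so that each new Newton coefficient is as small as
 * possible in modulus.  On exit z[] holds the chosen node order and c[] the
 * Newton coefficients of the interpolant to (x[i], y[i]), i = 0..m-1. */
void order_nodes_min_coeff(int m, const double x[], const double y[],
                           double z[], double c[])
{
    int *used = calloc(m, sizeof(int));
    for (int k = 0; k < m; k++) {
        int best = -1;
        double best_coeff = 0.0;
        for (int i = 0; i < m; i++) {
            if (used[i]) continue;
            /* Candidate coefficient: (y_i - q_{k-1}(x_i)) / prod_{j<k}(x_i - z_j). */
            double q = 0.0, w = 1.0;
            for (int j = 0; j < k; j++) {
                q += c[j] * w;
                w *= x[i] - z[j];
            }
            double coeff = (y[i] - q) / w;
            if (best < 0 || fabs(coeff) < fabs(best_coeff)) {
                best = i;
                best_coeff = coeff;
            }
        }
        used[best] = 1;
        z[k] = x[best];
        c[k] = best_coeff;
    }
    free(used);
}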
9.3 Conversion to Chebyshev Form
Conversion from the Newton form to Chebyshev series form is effected by evaluating the former at the $n$ values of $x$ at which $\bar{x}$ takes the values $\cos(j\pi/(n-1))$, for $j = 0, 1, \ldots, n-1$, and then interpolating these $n$ function values by a call of e02afc, which provides the Chebyshev series representation of the polynomial with very small additional relative error.
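Under the same illustrative assumptions as the earlier sketches (Newton nodes z[], coefficients c[], original variable), the first stage of this conversion can be pictured as follows; the values produced are those that a routine such as e02afc needs in order to return the Chebyshev coefficients.

#include <math.h>

/* Evaluate the Newton-form interpolant at the n points of [xmin, xmax] at
 * which the normalized variable xbar equals cos(j*pi/(n-1)).  Illustrative. */
void newton_at_cheb_points(int n, const double z[], const double c[],
                           double xmin, double xmax, double fvals[])
{
    const double pi = 3.14159265358979323846;
    if (n == 1) { fvals[0] = c[0]; return; }                    /* degenerate case */
    for (int j = 0; j < n; j++) {
        double xbar = cos(j * pi / (n - 1));
        double x = 0.5 * ((xmax - xmin) * xbar + xmax + xmin);  /* unnormalize     */
        double q = c[n - 1];                                    /* nested multiplication */
        for (int i = n - 2; i >= 0; i--)
            q = q * (x - z[i]) + c[i];
        fvals[j] = q;
    }
}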
9.4 Iterative Refinement
The iterative refinement process is performed as follows.
Firstly, an initial approximation, $q_1(x)$ say, is found by the technique described in Section 3. The $i$th step of the refinement process then consists of evaluating the residuals of the $i$th approximation $q_i(x)$, and constructing an interpolant, $dq_i(x)$, to these residuals. The next approximation, $q_{i+1}(x)$, to the interpolating polynomial is then obtained as
$$q_{i+1}(x) = q_i(x) + dq_i(x).$$
This completes the description of the $i$th step.
The iterative process is terminated according to the following criteria. When a polynomial is found whose performance indices (as defined in Section 7) are all less than the accuracy threshold (the fixed multiple of the machine precision referred to there), the process terminates after itmin further iterations (or after a total of itmax iterations if that occurs earlier). This will occur in most reasonable problems. The extra iterations are to allow for the possibility of further improvement. If no such polynomial is found, the process terminates after a total of itmax iterations. Both these criteria are overridden, however, in two special cases. Firstly, if for some value of $i$ the sum of the moduli of the Chebyshev coefficients of $dq_i(x)$ is greater than that of $q_i(x)$, it is concluded that the process is diverging and the process is terminated at once ($q_{i+1}(x)$ is not computed).
Secondly, if, at any stage, the performance indices are all computed as zero, again the process is terminated at once.
As the iterations proceed, a record is kept of the best polynomial. Subsequently, at the end of each iteration, the new polynomial replaces the current best polynomial if it satisfies two conditions (otherwise the best polynomial remains unchanged). The first condition is that at least one of its root-mean-square residual values, $r_k$ (see Section 7), is smaller than the corresponding value for the current best polynomial. The second condition takes two different forms according to whether or not the performance indices (see Section 7) of the current best polynomial are all less than the accuracy threshold. If they are, then the largest performance index of the new polynomial is required to be less than that of the current best polynomial. If they are not, the number of indices which are less than the threshold must not be smaller than for the current best polynomial. When the iterative process is terminated, it is the polynomial then recorded as best which is returned to you as $q(x)$.
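The shape of the process can be demonstrated with a self-contained toy that works with function values only, holds each polynomial as Newton coefficients over a fixed node order (so that the update $q_{i+1} = q_i + dq_i$ is a coefficient-wise addition), and runs a fixed number of steps; the hypothetical data, the normalization, derivative data, Chebyshev representation, termination tests and ‘best polynomial’ bookkeeping described above are all simplifications relative to e01aec.

#include <math.h>
#include <stdio.h>

enum { N = 5 };

/* Divided-difference (Newton) coefficients over the fixed node order x[]. */
static void divided_differences(int n, const double x[], const double v[],
                                double c[])
{
    for (int i = 0; i < n; i++) c[i] = v[i];
    for (int k = 1; k < n; k++)
        for (int i = n - 1; i >= k; i--)
            c[i] = (c[i] - c[i - 1]) / (x[i] - x[i - k]);
}

/* Evaluate the Newton form with nodes x[] and coefficients c[] at t. */
static double newton_eval(int n, const double x[], const double c[], double t)
{
    double q = c[n - 1];
    for (int i = n - 2; i >= 0; i--) q = q * (t - x[i]) + c[i];
    return q;
}

int main(void)
{
    /* Hypothetical data, for illustration only. */
    const double x[N] = { -1.0, -0.5, 0.0, 0.5, 1.0 };
    const double y[N] = {  0.5,  2.0, 1.0, 3.0, 2.5 };
    double c[N], r[N], d[N];

    divided_differences(N, x, y, c);              /* initial approximation q_1 */
    for (int it = 1; it <= 6; it++) {
        double rmax = 0.0;
        for (int i = 0; i < N; i++) {             /* residuals of q_it         */
            r[i] = y[i] - newton_eval(N, x, c, x[i]);
            if (fabs(r[i]) > rmax) rmax = fabs(r[i]);
        }
        printf("iteration %d: max residual %g\n", it, rmax);
        divided_differences(N, x, r, d);          /* interpolant dq_it         */
        for (int j = 0; j < N; j++) c[j] += d[j]; /* q_{it+1} = q_it + dq_it   */
    }
    return 0;
}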
10 Example
This example constructs an interpolant to the following data:
The coefficients $a_i$ in the Chebyshev series representation of $q(x)$ are printed, and also the residuals corresponding to each of the given function and derivative values.
This program is written in a generalized form which can read any number of data-sets.