NAG Library Routine Document
e01aef (dim1_cheb)
1
Purpose
e01aef constructs the Chebyshev series representation of a polynomial interpolant to a set of data which may contain derivative values.
2
Specification
Fortran Interface
Subroutine e01aef ( 
m, xmin, xmax, x, y, ip, n, itmin, itmax, a, wrk, lwrk, iwrk, liwrk, ifail) 
Integer, Intent (In) :: m, ip(m), n, itmin, itmax, lwrk, liwrk
Integer, Intent (Inout) :: ifail
Integer, Intent (Out) :: iwrk(liwrk)
Real (Kind=nag_wp), Intent (In) :: xmin, xmax, x(m), y(n)
Real (Kind=nag_wp), Intent (Out) :: a(n), wrk(lwrk)

C Header Interface
#include <nagmk26.h>
void 
e01aef_ (const Integer *m, const double *xmin, const double *xmax, const double x[], const double y[], const Integer ip[], const Integer *n, const Integer *itmin, const Integer *itmax, double a[], double wrk[], const Integer *lwrk, Integer iwrk[], const Integer *liwrk, Integer *ifail) 

3
Description
Let $m$ distinct values ${x}_{\mathit{i}}$ of an independent variable $x$ be given, with ${x}_{\mathrm{min}}\le {x}_{\mathit{i}}\le {x}_{\mathrm{max}}$, for $\mathit{i}=1,2,\dots ,m$. For each value ${x}_{i}$, suppose that the value ${y}_{i}$ of the dependent variable $y$ together with the first ${p}_{i}$ derivatives of $y$ with respect to $x$ are given. Each ${p}_{i}$ must therefore be a nonnegative integer, with the total number of interpolating conditions, $n$, equal to $m+{\displaystyle \sum _{i=1}^{m}}{p}_{i}$.
e01aef calculates the unique polynomial
$q\left(x\right)$ of degree
$n-1$ (or less) which is such that
${q}^{\left(\mathit{k}\right)}\left({x}_{\mathit{i}}\right)={y}_{\mathit{i}}^{\left(\mathit{k}\right)}$, for
$\mathit{i}=1,2,\dots ,m$ and
$\mathit{k}=0,1,\dots ,{p}_{\mathit{i}}$. Here
${q}^{\left(0\right)}\left({x}_{i}\right)$ means
$q\left({x}_{i}\right)$. This polynomial is represented in Chebyshev series form in the normalized variable $\overline{x}$, as follows:
$$q\left(x\right)=\tfrac{1}{2}{a}_{0}{T}_{0}\left(\overline{x}\right)+{a}_{1}{T}_{1}\left(\overline{x}\right)+\cdots +{a}_{n-1}{T}_{n-1}\left(\overline{x}\right),$$
where
$$\overline{x}=\frac{2x-{x}_{\mathrm{min}}-{x}_{\mathrm{max}}}{{x}_{\mathrm{max}}-{x}_{\mathrm{min}}},$$
so that $-1\le \overline{x}\le 1$ for $x$ in the interval ${x}_{\mathrm{min}}$ to ${x}_{\mathrm{max}}$, and where ${T}_{i}\left(\overline{x}\right)$ is the Chebyshev polynomial of the first kind of degree $i$ with argument $\overline{x}$.
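As an illustrative sketch of how a series in this representation is evaluated (the helper names here are hypothetical; in practice e02akf performs the evaluation), the normalization and a Clenshaw recurrence for the half-first-coefficient convention can be written in C:

```c
#include <math.h>

/* Map x in [xmin, xmax] linearly to the normalized variable xbar in [-1, 1]. */
double cheb_normalize(double x, double xmin, double xmax) {
    return (2.0 * x - xmin - xmax) / (xmax - xmin);
}

/* Evaluate 1/2 a[0] T_0(xbar) + a[1] T_1(xbar) + ... + a[n-1] T_{n-1}(xbar)
   by Clenshaw's recurrence; the halved first coefficient matches the
   convention of the series returned by e01aef in the array a. */
double cheb_eval(int n, const double *a, double xbar) {
    double b1 = 0.0, b2 = 0.0;
    for (int k = n - 1; k >= 1; k--) {
        double b0 = 2.0 * xbar * b1 - b2 + a[k];
        b2 = b1;
        b1 = b0;
    }
    return xbar * b1 - b2 + 0.5 * a[0];
}
```

For example, the coefficients $\{2,0,1\}$ represent $1+T_2(\overline{x})=2\overline{x}^2$, so evaluating at $\overline{x}=0.5$ gives $0.5$.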
(The polynomial interpolant can subsequently be evaluated for any value of
$x$ in the given range by using
e02akf. Chebyshev series representations of the derivative(s) and integral(s) of
$q\left(x\right)$ may be obtained by (repeated) use of
e02ahf and
e02ajf.)
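The derivative series mentioned in the parenthetical above follows from a standard recurrence between the Chebyshev coefficients of a series and those of its derivative. A minimal sketch with respect to the normalized variable only (an illustration of the mathematics, not of e02ahf itself; differentiation with respect to $x$ additionally requires the factor $2/\left({x}_{\mathrm{max}}-{x}_{\mathrm{min}}\right)$):

```c
/* Given coefficients a[0..n-1] of q(xbar) = 1/2 a[0] T_0 + a[1] T_1 + ...
   (half-first-term convention), compute coefficients b[0..n-1] of
   dq/dxbar in the same convention, using the standard recurrence
   b[k-1] = b[k+1] + 2k a[k]; b[n-1] is returned as zero. */
void cheb_deriv(int n, const double *a, double *b) {
    b[n - 1] = 0.0;
    if (n > 1) b[n - 2] = 2.0 * (n - 1) * a[n - 1];
    for (int k = n - 2; k >= 1; k--)
        b[k - 1] = b[k + 1] + 2.0 * k * a[k];
}
```

For instance, the coefficients of $T_2$ (i.e. $a=\{0,0,1\}$) map to $b=\{0,4,0\}$, the series $4T_1(\overline{x})=4\overline{x}$, which is indeed the derivative of $T_2(\overline{x})=2\overline{x}^2-1$.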
The method used consists first of constructing a divided-difference table from the normalized $\overline{x}$ values and the given values of $y$ and its derivatives with respect to $\overline{x}$. The Newton form of
$q\left(x\right)$ is then obtained from this table, as described in
Huddleston (1974) and
Krogh (1970), with the modification described in
Section 9.2. The Newton form of the polynomial is then converted to Chebyshev series form as described in
Section 9.3.
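A minimal sketch of the confluent (Hermite) divided-difference table and Newton-form evaluation underlying this step, assuming each ${x}_{i}$ is supplied repeated ${p}_{i}+1$ times with equal values adjacent; this illustrates only the basic table and omits the point-ordering modification of Section 9.2 (all names are hypothetical, not NAG code):

```c
#include <math.h>
#include <stdlib.h>

/* Newton coefficients by a confluent (Hermite) divided-difference table.
   z[0..n-1]: abscissae, each x_i repeated p_i + 1 times, equal values adjacent.
   der[j*n + k]: k-th derivative of y at z[j] (k = 0 is the function value);
   for k > 0 only entries with z[j] == z[j-k] are ever read.
   c[0..n-1] receives the Newton coefficients. */
void hermite_dd(int n, const double *z, const double *der, double *c) {
    double *col = malloc((size_t)n * sizeof *col);
    double fact = 1.0;
    for (int j = 0; j < n; j++) col[j] = der[j * n]; /* zeroth column: values */
    c[0] = col[0];
    for (int k = 1; k < n; k++) {
        fact *= k;
        for (int j = n - 1; j >= k; j--) {
            if (z[j] == z[j - k])
                col[j] = der[j * n + k] / fact; /* confluent entry: y^(k)/k! */
            else
                col[j] = (col[j] - col[j - 1]) / (z[j] - z[j - k]);
        }
        c[k] = col[k];
    }
    free(col);
}

/* Evaluate the Newton form q(t) = c0 + (t-z0)(c1 + (t-z1)(c2 + ...)). */
double newton_eval(int n, const double *z, const double *c, double t) {
    double q = c[n - 1];
    for (int j = n - 2; j >= 0; j--) q = q * (t - z[j]) + c[j];
    return q;
}
```

For the conditions $q(0)=0$, $q'(0)=0$, $q(1)=1$ (so $z=\{0,0,1\}$), the unique quadratic is $q(t)=t^2$.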
Since the errors incurred by these stages can be considerable, a form of iterative refinement is used to improve the solution. This refinement is particularly useful when derivatives of rather high order are given in the data. In reasonable examples, the refinement will usually terminate with a certain accuracy criterion satisfied by the polynomial (see
Section 7). In more difficult examples, the criterion may not be satisfied and refinement will continue until the maximum number of iterations (as specified by the input argument
itmax) is reached.
In extreme examples, the iterative process may diverge (even though the accuracy criterion is satisfied): if a certain divergence criterion is satisfied, the process terminates at once. In all cases the routine returns the ‘best’ polynomial achieved before termination. For the definition of ‘best’ and details of iterative refinement and termination criteria, see
Section 9.4.
4
References
Huddleston R E (1974) CDC 6600 routines for the interpolation of data and of data with derivatives SLL-74-0214 Sandia Laboratories (Reprint)
Krogh F T (1970) Efficient algorithms for polynomial interpolation and numerical differentiation Math. Comput. 24 185–190
5
Arguments
 1: $\mathbf{m}$ – IntegerInput

On entry: $m$, the number of given values of the independent variable $x$.
Constraint:
${\mathbf{m}}\ge 1$.
 2: $\mathbf{xmin}$ – Real (Kind=nag_wp)Input
 3: $\mathbf{xmax}$ – Real (Kind=nag_wp)Input

On entry: the lower and upper end points, respectively, of the interval $\left[{x}_{\mathrm{min}},{x}_{\mathrm{max}}\right]$. If they are not determined by your problem, it is recommended that they be set respectively to the smallest and largest values among the ${x}_{i}$.
Constraint:
${\mathbf{xmin}}<{\mathbf{xmax}}$.
 4: $\mathbf{x}\left({\mathbf{m}}\right)$ – Real (Kind=nag_wp) arrayInput

On entry: the value of ${x}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,m$. The ${\mathbf{x}}\left(i\right)$ need not be ordered.
Constraint:
${\mathbf{xmin}}\le {\mathbf{x}}\left(i\right)\le {\mathbf{xmax}}$, and the ${\mathbf{x}}\left(i\right)$ must be distinct.
 5: $\mathbf{y}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) arrayInput

On entry: the given values of the dependent variable, and derivatives, as follows:
The first ${p}_{1}+1$ elements contain ${y}_{1},{y}_{1}^{\left(1\right)},\dots ,{y}_{1}^{\left({p}_{1}\right)}$ in that order.
The next ${p}_{2}+1$ elements contain ${y}_{2},{y}_{2}^{\left(1\right)},\dots ,{y}_{2}^{\left({p}_{2}\right)}$ in that order.
$\text{\hspace{1em}}\vdots $
The last ${p}_{m}+1$ elements contain ${y}_{m},{y}_{m}^{\left(1\right)},\dots ,{y}_{m}^{\left({p}_{m}\right)}$ in that order.
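The packing above can be sketched in C as follows (the per-point staging layout `vals[i][0..ip[i]]` and the helper name are hypothetical, used only to illustrate the documented ordering):

```c
/* Pack function and derivative values into a single array y in the order
   e01aef expects: y_1, y_1^(1), ..., y_1^(p_1), y_2, ..., y_m^(p_m).
   vals[i] points to ip[i] + 1 numbers for the i-th point. */
void pack_y(int m, const int *ip, const double *const *vals, double *y) {
    int j = 0;
    for (int i = 0; i < m; i++)
        for (int k = 0; k <= ip[i]; k++)
            y[j++] = vals[i][k];
}
```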
 6: $\mathbf{ip}\left({\mathbf{m}}\right)$ – Integer arrayInput

On entry: ${p}_{\mathit{i}}$, the order of the highest-order derivative whose value is given at ${x}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,m$. If the value of $y$ only is given for some ${x}_{i}$ then the corresponding value of ${\mathbf{ip}}\left(i\right)$ must be zero.
Constraint:
${\mathbf{ip}}\left(\mathit{i}\right)\ge 0$, for $\mathit{i}=1,2,\dots ,{\mathbf{m}}$.
 7: $\mathbf{n}$ – IntegerInput

On entry: $n$, the total number of interpolating conditions.
Constraint:
${\mathbf{n}}={\mathbf{m}}+{\displaystyle \sum _{\mathit{i}=1}^{{\mathbf{m}}}}{\mathbf{ip}}\left(\mathit{i}\right)$.
 8: $\mathbf{itmin}$ – IntegerInput
 9: $\mathbf{itmax}$ – IntegerInput

On entry: respectively the minimum and maximum number of iterations to be performed by the routine (for full details see
Section 9.4). Setting
itmin and/or
itmax negative or zero invokes default value(s) of
$2$ and/or
$10$, respectively.
The default values will be satisfactory for most problems, but occasionally significant improvement will result from using higher values.
Suggested value:
${\mathbf{itmin}}=0$ and ${\mathbf{itmax}}=0$.
 10: $\mathbf{a}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) arrayOutput

On exit: ${\mathbf{a}}\left(\mathit{i}+1\right)$ contains the coefficient ${a}_{\mathit{i}}$ in the Chebyshev series representation of $q\left(x\right)$, for $\mathit{i}=0,1,\dots ,n-1$.
 11: $\mathbf{wrk}\left({\mathbf{lwrk}}\right)$ – Real (Kind=nag_wp) arrayOutput

On exit: used as workspace, but see also
Section 9.5.
 12: $\mathbf{lwrk}$ – IntegerInput

On entry: the dimension of the array
wrk as declared in the (sub)program from which
e01aef is called.
Constraint:
${\mathbf{lwrk}}\ge 7\times {\mathbf{n}}+5\times \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(\mathit{ipmax},0\right)+{\mathbf{m}}+7$, where $\mathit{ipmax}$ is the largest element of ${\mathbf{ip}}\left(\mathit{i}\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{m}}$.
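The constraint above, together with ${\mathbf{n}}={\mathbf{m}}+\sum {\mathbf{ip}}\left(i\right)$ and ${\mathbf{liwrk}}\ge 2\times {\mathbf{m}}+2$, can be computed by a small helper (the function name is hypothetical; it simply restates the documented formulas):

```c
/* Minimum workspace lengths for e01aef, per the documented constraints:
   n     = m + sum(ip),
   lwrk  >= 7*n + 5*max(ipmax, 0) + m + 7, where ipmax = max_i ip[i],
   liwrk >= 2*m + 2. */
void e01aef_workspace(int m, const int *ip, int *n, int *lwrk, int *liwrk) {
    int sum = 0, ipmax = 0;
    for (int i = 0; i < m; i++) {
        sum += ip[i];
        if (ip[i] > ipmax) ipmax = ip[i];
    }
    *n = m + sum;
    *lwrk = 7 * (*n) + 5 * (ipmax > 0 ? ipmax : 0) + m + 7;
    *liwrk = 2 * m + 2;
}
```

For example, with $m=2$ and ${\mathbf{ip}}=\left(1,2\right)$, this gives $n=5$, ${\mathbf{lwrk}}\ge 54$ and ${\mathbf{liwrk}}\ge 6$.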
 13: $\mathbf{iwrk}\left({\mathbf{liwrk}}\right)$ – Integer arrayOutput

On exit: used as workspace, but see also
Section 9.5.
 14: $\mathbf{liwrk}$ – IntegerInput

On entry: the dimension of the array
iwrk as declared in the (sub)program from which
e01aef is called.
Constraint:
${\mathbf{liwrk}}\ge 2\times {\mathbf{m}}+2$.
 15: $\mathbf{ifail}$ – IntegerInput/Output

On entry:
ifail must be set to
$0$, $-1$ or $1$. If you are unfamiliar with this argument you should refer to
Section 3.4 in How to Use the NAG Library and its Documentation for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value
$-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value
$1$ is recommended. Otherwise, if you are not familiar with this argument, the recommended value is
$0$.
When the value $\mathbf{-1}\text{ or }\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit:
${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see
Section 6).
6
Error Indicators and Warnings
If on entry
${\mathbf{ifail}}=0$ or
$-1$, explanatory error messages are output on the current error message unit (as defined by
x04aaf).
Errors or warnings detected by the routine:
 ${\mathbf{ifail}}=1$

On entry,  ${\mathbf{m}}<1$, 
or  ${\mathbf{n}}\ne {\mathbf{m}}+{\mathbf{ip}}\left(1\right)+{\mathbf{ip}}\left(2\right)+\cdots +{\mathbf{ip}}\left({\mathbf{m}}\right)$, 
or  ${\mathbf{lwrk}}<7\times {\mathbf{n}}+5\times \mathit{ipmax}+{\mathbf{m}}+7$ (see lwrk for the definition of $\mathit{ipmax}$), 
or  ${\mathbf{liwrk}}<2\times {\mathbf{m}}+2$. 
 ${\mathbf{ifail}}=2$

On entry,  ${\mathbf{ip}}\left(i\right)<0$ for some $i$. 
 ${\mathbf{ifail}}=3$

On entry,  ${\mathbf{xmin}}\ge {\mathbf{xmax}}$, 
or  ${\mathbf{x}}\left(i\right)<{\mathbf{xmin}}$ for some $i$, 
or  ${\mathbf{x}}\left(i\right)>{\mathbf{xmax}}$, 
or  ${\mathbf{x}}\left(i\right)={\mathbf{x}}\left(j\right)$ for some $i\ne j$. 
 ${\mathbf{ifail}}=4$

Not all the performance indices are less than eight times the
machine precision, although
itmax iterations have been performed. Arguments
a,
wrk and
iwrk relate to the best polynomial determined. A more accurate solution may possibly be obtained by increasing
itmax and recalling the routine. See also
Sections 7,
9.4 and
9.5.
 ${\mathbf{ifail}}=5$

The computation has been terminated because the iterative process appears to be diverging. (Arguments
a,
wrk and
iwrk relate to the best polynomial determined.) Thus the problem specified by your data is probably too ill-conditioned for the solution to be satisfactory. This may result from some of the
${\mathbf{x}}\left(i\right)$ being very close together, or from the number of interpolating conditions,
n, being large. If in such cases the conditions do not involve derivatives, you are likely to obtain a much more satisfactory solution to your problem either by cubic spline interpolation (see
e01baf) or by curve-fitting with a polynomial or spline in which the number of coefficients is less than
n, preferably much less if
n is large (see
Chapter E02). But see
Sections 7,
9.4 and
9.5.
 ${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please
contact
NAG.
See
Section 3.9 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See
Section 3.8 in How to Use the NAG Library and its Documentation for further information.
 ${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See
Section 3.7 in How to Use the NAG Library and its Documentation for further information.
7
Accuracy
A complete error analysis is not currently available, but the method gives good results for reasonable problems.
It is important to realise that for some sets of data, the polynomial interpolation problem is ill-conditioned. That is, a small perturbation in the data may induce large changes in the polynomial, even in exact arithmetic. Though by no means the worst example, interpolation by a single polynomial to a large number of function values given at points equally spaced across the range is notoriously ill-conditioned and the polynomial interpolating such a dataset is prone to exhibit enormous oscillations between the data points, especially near the ends of the range. These will be reflected in the Chebyshev coefficients being large compared with the given function values. A more familiar example of ill-conditioning occurs in the solution of certain systems of linear algebraic equations, in which a small change in the elements of the matrix and/or in the components of the right-hand side vector induces a relatively large change in the solution vector. The best that can be achieved in these cases is to make the residual vector small in some sense. If this is possible, the computed solution is exact for a slightly perturbed set of data. Similar considerations apply to the interpolation problem.
The residuals
${y}_{i}^{\left(k\right)}-{q}^{\left(k\right)}\left({x}_{i}\right)$ are available for inspection
(see
Section 9.5).
To assess whether these are reasonable, however, it is necessary to relate them to the largest function and derivative values taken by
$q\left(x\right)$ over the interval
$\left[{x}_{\mathrm{min}},{x}_{\mathrm{max}}\right]$. The following performance indices aim to do this. Let the
$k$th derivative of
$q$ with respect to the normalized variable
$\overline{x}$ be given by the Chebyshev series
$$\tfrac{1}{2}{c}_{0}^{k}{T}_{0}\left(\overline{x}\right)+{c}_{1}^{k}{T}_{1}\left(\overline{x}\right)+\cdots +{c}_{n-1-k}^{k}{T}_{n-1-k}\left(\overline{x}\right).$$
Let ${A}_{k}$ denote the sum of the moduli of these coefficients (this is an upper bound on the $k$th derivative in the interval and is taken as a measure of the maximum size of this derivative), and define
$${S}_{k}=\underset{j\ge k}{\mathrm{max}}{A}_{j}.$$
Then, if the root-mean-square value of the residuals of ${q}^{\left(k\right)}$, scaled so as to relate to the normalized variable $\overline{x}$, is denoted by ${r}_{k}$, the performance indices are defined by
$${P}_{k}={r}_{k}/{S}_{k},\text{ for }k=0,1,\dots ,\underset{i}{\mathrm{max}}{p}_{i}.$$
It is expected that, in reasonable cases, they will all be less than (say)
$8$ times the
machine precision (this is the accuracy criterion mentioned in
Section 3), and in many cases will be of the order of
machine precision or less.
8
Parallelism and Performance
e01aef is not threaded in any implementation.
9
Further Comments
9.1
Timing
Computation time is approximately proportional to
$\mathit{it}\times {n}^{3}$, where
$\mathit{it}$ is the number of iterations actually used.
(See
Section 9.5.)
9.2
Divided-difference Strategy
In constructing each new coefficient in the Newton form of the polynomial, a new ${x}_{i}$ must be brought into the computation. The ${x}_{i}$ chosen is that which yields the smallest new coefficient. This strategy increases the stability of the divided-difference technique, sometimes quite markedly, by reducing errors due to cancellation.
9.3
Conversion to Chebyshev Form
Conversion from the Newton form to Chebyshev series form is effected by evaluating the former at the
$n$ values of
$\overline{x}$ at which
${T}_{n-1}\left(\overline{x}\right)$ takes the value
$\pm 1$, and then interpolating these
$n$ function values by a call of
e02aff, which provides the Chebyshev series representation of the polynomial with very small additional relative error.
9.4
Iterative Refinement
The iterative refinement process is performed as follows.
Firstly, an initial approximation,
${q}_{1}\left(x\right)$ say, is found by the technique described in
Section 3. The
$r$th step of the refinement process then consists of evaluating the residuals of the
$r$th approximation
${q}_{r}\left(x\right)$, and constructing an interpolant,
$d{q}_{r}\left(x\right)$, to these residuals. The next approximation
${q}_{r+1}\left(x\right)$ to the interpolating polynomial is then obtained as
$${q}_{r+1}\left(x\right)={q}_{r}\left(x\right)+d{q}_{r}\left(x\right).$$
This completes the description of the
$r$th step.
The iterative process is terminated according to the following criteria. When a polynomial is found whose performance indices (as defined in
Section 7) are all less than
$8$ times the
machine precision, the process terminates after
itmin further iterations (or after a total of
itmax iterations if that occurs earlier). This will occur in most reasonable problems. The extra iterations are to allow for the possibility of further improvement. If no such polynomial is found, the process terminates after a total of
itmax iterations. Both these criteria are overridden, however, in two special cases. Firstly, if for some value of
$r$ the sum of the moduli of the Chebyshev coefficients of
$d{q}_{r}\left(x\right)$ is greater than that of
${q}_{r}\left(x\right)$, it is concluded that the process is diverging and the process is terminated at once (
${q}_{r+1}\left(x\right)$ is not computed).
Secondly, if at any stage, the performance indices are all computed as zero, again the process is terminated at once.
As the iterations proceed, a record is kept of the best polynomial. Subsequently, at the end of each iteration, the new polynomial replaces the current best polynomial if it satisfies two conditions (otherwise the best polynomial remains unchanged). The first condition is that at least one of its root-mean-square residual values,
${r}_{k}$ (see
Section 7) is smaller than the corresponding value for the current best polynomial. The second condition takes two different forms according to whether or not the performance indices (see
Section 7) of the current best polynomial are all less than
$8$ times the
machine precision. If they are, then the largest performance index of the new polynomial is required to be less than that of the current best polynomial. If they are not, the number of indices which are less than
$8$ times the
machine precision must not be smaller than for the current best polynomial. When the iterative process is terminated, it is the polynomial then recorded as best, which is returned to you as
$q\left(x\right)$.
9.5
Workspace Information
On successful exit, and also if
${\mathbf{ifail}}={\mathbf{4}}$ or
${\mathbf{5}}$ on exit, the following information is contained in the workspace arrays
wrk and
iwrk:
${\mathbf{wrk}}\left(\mathit{k}+1\right)$, for $\mathit{k}=0,1,\dots ,\mathit{ipmax}$ where $\mathit{ipmax}={\displaystyle \underset{i}{\mathrm{max}}}\phantom{\rule{0.25em}{0ex}}{p}_{i}$, contains the ratio of ${P}_{k}$, the performance index relating to the $k$th derivative of the $q\left(x\right)$ finally provided, to $8$ times the machine precision.
${\mathbf{wrk}}\left(\mathit{ipmax}+1+\mathit{j}\right)$, for
$\mathit{j}=1,2,\dots ,n$, contains the
$j$th residual, i.e., the value of
${y}_{i}^{\left(k\right)}-{q}^{\left(k\right)}\left({x}_{i}\right)$, where
$i$ and
$k$ are the appropriate values corresponding to the
$j$th element in the array
y (see the description of
y in
Section 5).
${\mathbf{iwrk}}\left(1\right)$ contains the number of iterations actually performed in deriving $q\left(x\right)$.
If, on exit, ${\mathbf{ifail}}={\mathbf{4}}$ or ${\mathbf{5}}$, the $q\left(x\right)$ finally provided may still be adequate for your requirements. To assess this you should examine the residuals contained in ${\mathbf{wrk}}\left(\mathit{ipmax}+1+\mathit{j}\right)$, for $\mathit{j}=1,2,\dots ,n$, to see whether they are acceptably small.
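To illustrate the indexing of the residual block (0-based in C, so the Fortran elements ${\mathbf{wrk}}\left(\mathit{ipmax}+1+\mathit{j}\right)$, $j=1,\dots ,n$, are `wrk[ipmax+1]` through `wrk[ipmax+n]`), a hypothetical helper scanning for the largest residual might look like this; the wrk contents in the example are dummy values:

```c
#include <math.h>

/* Largest absolute residual left by e01aef in the workspace array:
   wrk[0..ipmax]           performance-index ratios (index / (8 * eps)),
   wrk[ipmax+1..ipmax+n]   residuals, in the same order as the y array,
   iwrk[0]                 number of iterations performed. */
double max_abs_residual(const double *wrk, int ipmax, int n) {
    double r = 0.0;
    for (int j = 1; j <= n; j++) {
        double v = fabs(wrk[ipmax + j]);
        if (v > r) r = v;
    }
    return r;
}
```

With dummy contents `{0.5, 0.2, 1e-3, -2e-3, 5e-4}` for $\mathit{ipmax}=1$ and $n=3$, the residuals are the last three entries and the largest in magnitude is $2\times {10}^{-3}$.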
10
Example
This example constructs an interpolant
$q\left(x\right)$ to the following data:
The coefficients in the Chebyshev series representation of
$q\left(x\right)$ are printed, and also the residuals corresponding to each of the given function and derivative values.
This program is written in a generalized form which can read any number of datasets.
10.1
Program Text
Program Text (e01aefe.f90)
10.2
Program Data
Program Data (e01aefe.d)
10.3
Program Results
Program Results (e01aefe.r)