e02caf forms an approximation to the weighted least squares Chebyshev series surface fit to data arbitrarily distributed on lines parallel to one independent coordinate axis.
The routine may be called by the names e02caf or nagf_fit_dim2_cheb_lines.
3 Description
e02caf determines a bivariate polynomial approximation of degree $k$ in $x$ and $l$ in $y$ to the set of data points $({x}_{\mathit{r},\mathit{s}},{y}_{\mathit{s}},{f}_{\mathit{r},\mathit{s}})$, with weights ${w}_{\mathit{r},\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$ and $\mathit{r}=1,2,\dots ,{m}_{\mathit{s}}$. That is, the data points are on lines $y={y}_{s}$, but the $x$ values may be different on each line. The values of $k$ and $l$ are prescribed by you (for guidance on their choice, see Section 9). The subroutine is based on the method described in Sections 5 and 6 of Clenshaw and Hayes (1965).
The polynomial is represented in double Chebyshev series form with arguments $\overline{x}$ and $\overline{y}$. The arguments lie in the range $\mathrm{-1}$ to $+1$ and are related to the original variables $x$ and $y$ by the transformations
$$\overline{x}=\frac{2x-({x}_{\mathrm{max}}+{x}_{\mathrm{min}})}{({x}_{\mathrm{max}}-{x}_{\mathrm{min}})}\text{\hspace{1em} and \hspace{1em}}\overline{y}=\frac{2y-({y}_{\mathrm{max}}+{y}_{\mathrm{min}})}{({y}_{\mathrm{max}}-{y}_{\mathrm{min}})}\text{.}$$
Here ${y}_{\mathrm{max}}$ and ${y}_{\mathrm{min}}$ are set by the subroutine to, respectively, the largest and smallest value of ${y}_{s}$, but ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ are functions of $y$ prescribed by you (see Section 9). For this subroutine, only their values ${x}_{\mathrm{max}}^{\left(s\right)}$ and ${x}_{\mathrm{min}}^{\left(s\right)}$ at each $y={y}_{s}$ are required. For each $s=1,2,\dots ,n$, ${x}_{\mathrm{max}}^{\left(s\right)}$ must not be less than the largest ${x}_{r,s}$ on the line $y={y}_{s}$, and, similarly, ${x}_{\mathrm{min}}^{\left(s\right)}$ must not be greater than the smallest ${x}_{r,s}$.
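The transformations above are simple affine maps onto $[-1,1]$. As an illustrative sketch (in Python/NumPy, not part of the NAG interface), a helper applying them might look like:

```python
import numpy as np

def normalize(t, t_min, t_max):
    """Map t in [t_min, t_max] onto the Chebyshev interval [-1, 1],
    as in the xbar/ybar transformations of Section 3."""
    return (2.0 * t - (t_max + t_min)) / (t_max - t_min)

# The end points map to -1 and +1 exactly, the midpoint to 0.
print(normalize(np.array([2.0, 5.0, 8.0]), 2.0, 8.0))  # [-1.  0.  1.]
```

Note that on each line $y={y}_{s}$ the routine uses the line's own ${x}_{\mathrm{min}}^{\left(s\right)}$ and ${x}_{\mathrm{max}}^{\left(s\right)}$ in this map.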
The double Chebyshev series takes the form
$$\sum _{i=0}^{k}\sum _{j=0}^{l}{a}_{ij}{T}_{i}\left(\overline{x}\right){T}_{j}\left(\overline{y}\right)\text{,}$$
where ${T}_{i}\left(\overline{x}\right)$ is the Chebyshev polynomial of the first kind of degree $i$ with argument $\overline{x}$, and ${T}_{j}\left(\overline{y}\right)$ is similarly defined. However, the standard convention, followed in this subroutine, is that coefficients in the above expression which have either $i$ or $j$ zero are written as $\frac{1}{2}{a}_{ij}$, instead of simply ${a}_{ij}$, and the coefficient with both $i$ and $j$ equal to zero is written as $\frac{1}{4}{a}_{0,0}$. The series with coefficients output by the subroutine should be summed using this convention. e02cbf is available to compute values of the fitted function from these coefficients.
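The half-coefficient convention can be illustrated with a small Python/NumPy sketch (a hypothetical helper, not part of the NAG interface) that converts NAG-convention coefficients to the plain form expected by `numpy.polynomial.chebyshev.chebval2d`:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval2d

def eval_nag_cheb2d(xbar, ybar, a):
    """Evaluate a double Chebyshev series whose coefficients a[i, j]
    follow the NAG/Clenshaw convention: terms with i == 0 or j == 0
    carry a factor 1/2 (so a[0, 0] carries 1/4)."""
    c = np.asarray(a, dtype=float).copy()
    c[0, :] *= 0.5          # halve coefficients with i = 0
    c[:, 0] *= 0.5          # halve coefficients with j = 0 (a[0,0] now * 1/4)
    return chebval2d(xbar, ybar, c)

# With a[0,0] = 4 and all other coefficients zero, the surface is the constant 1.
a = np.zeros((3, 3)); a[0, 0] = 4.0
print(eval_nag_cheb2d(0.3, -0.7, a))   # 1.0
```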
The subroutine first obtains Chebyshev series coefficients ${c}_{s,\mathit{i}}$, for $\mathit{i}=0,1,\dots ,k$, of the weighted least squares polynomial curve fit of degree $k$ in $\overline{x}$ to the data on each line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$, in turn, using an auxiliary subroutine. The same subroutine is then called $k+1$ times to fit ${c}_{\mathit{s},i}$, for $\mathit{s}=1,2,\dots ,n$, by a polynomial of degree $l$ in $\overline{y}$, for each $i=0,1,\dots ,k$. The resulting coefficients are the required ${a}_{ij}$.
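The two-stage procedure can be sketched in Python, with NumPy's `chebfit` standing in for the NAG auxiliary subroutine; the function name, data layout and plain (unhalved) coefficient convention here are illustrative only:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def two_stage_cheb_fit(lines, ybar, k, l):
    """Sketch of the two-stage algorithm.  'lines' is a list of
    (xbar, f, w) triples, one per line y = y_s, with abscissae
    already scaled to [-1, 1].  Returns a (k+1) x (l+1) array a
    where a[i, j] multiplies T_i(xbar) * T_j(ybar)."""
    # Stage 1: weighted degree-k Chebyshev fit on each line y = y_s.
    c = np.array([C.chebfit(x, f, k, w=w) for (x, f, w) in lines])   # (n, k+1)
    # Stage 2: for each i, fit the n values c[:, i] by degree l in ybar.
    a = np.array([C.chebfit(ybar, c[:, i], l) for i in range(k + 1)])
    return a
```

For data generated by $f=\overline{x}\overline{y}$ this recovers (in the plain convention) a single nonzero coefficient multiplying ${T}_{1}\left(\overline{x}\right){T}_{1}\left(\overline{y}\right)$.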
You can force the fit to contain a given polynomial factor. This allows for the surface fit to be constrained to have specified values and derivatives along the boundaries $x={x}_{\mathrm{min}}$, $x={x}_{\mathrm{max}}$, $y={y}_{\mathrm{min}}$ and $y={y}_{\mathrm{max}}$ or indeed along any lines $\overline{x}=\text{}$ constant or $\overline{y}=\text{}$ constant (see Section 8 of Clenshaw and Hayes (1965)).
4 References
Clenshaw C W and Hayes J G (1965) Curve and surface fitting J. Inst. Math. Appl. 1 164–183
Hayes J G (ed.) (1970) Numerical Approximation to Functions and Data Athlone Press, London
5 Arguments
1: $\mathbf{m}\left({\mathbf{n}}\right)$ – Integer arrayInput
On entry: ${\mathbf{m}}\left(\mathit{s}\right)$ must be set to ${m}_{\mathit{s}}$, the number of data $x$ values on the line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$.
Constraint:
${\mathbf{m}}\left(\mathit{s}\right)>0$, for $\mathit{s}=1,2,\dots ,{\mathbf{n}}$.
2: $\mathbf{n}$ – IntegerInput
On entry: the number of lines $y=\text{}$ constant on which data points are given.
Constraint:
${\mathbf{n}}>0$.
3: $\mathbf{k}$ – IntegerInput
On entry: $k$, the required degree of $x$ in the fit.
Constraint:
for $s=1,2,\dots ,n$, ${\mathbf{inuxp1}}-1\le {\mathbf{k}}<\mathit{mdist}\left(s\right)+{\mathbf{inuxp1}}-1$, where $\mathit{mdist}\left(s\right)$ is the number of distinct $x$ values with nonzero weight on the line $y={y}_{s}$. See Section 9.
4: $\mathbf{l}$ – IntegerInput
On entry: $l$, the required degree of $y$ in the fit.
5: $\mathbf{x}\left({\mathbf{mtot}}\right)$ – Real (Kind=nag_wp) arrayInput
On entry: the $x$ values of the data points. The sequence must be
all points on $y={y}_{1}$, followed by
all points on $y={y}_{2}$, followed by
$\vdots $
all points on $y={y}_{n}$.
Constraint:
for each ${y}_{s}$, the $x$ values must be in nondecreasing order.
6: $\mathbf{y}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) arrayInput
On entry: ${\mathbf{y}}\left(\mathit{s}\right)$ must contain the $y$ value of line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$, on which data is given.
Constraint:
the ${y}_{s}$ values must be in strictly increasing order.
7: $\mathbf{f}\left({\mathbf{mtot}}\right)$ – Real (Kind=nag_wp) arrayInput
On entry: $f$, the data values of the dependent variable in the same sequence as the $x$ values.
8: $\mathbf{w}\left({\mathbf{mtot}}\right)$ – Real (Kind=nag_wp) arrayInput
On entry: the weights to be assigned to the data points, in the same sequence as the $x$ values. These weights should be calculated from estimates of the absolute accuracies of the ${f}_{r}$, expressed as standard deviations, probable errors or some other measure which is of the same dimensions as ${f}_{r}$. Specifically, each ${w}_{r}$ should be inversely proportional to the accuracy estimate of ${f}_{r}$. Often weights all equal to unity will be satisfactory. If a particular weight is zero, the corresponding data point is omitted from the fit.
9: $\mathbf{mtot}$ – IntegerInput
On entry: the dimension of the arrays x, f and w as declared in the (sub)program from which e02caf is called.
10: $\mathbf{a}\left({\mathbf{na}}\right)$ – Real (Kind=nag_wp) arrayOutput
On exit: contains the Chebyshev coefficients of the fit. ${\mathbf{a}}\left(i\times ({\mathbf{l}}+1)+j\right)$ is the coefficient ${a}_{ij}$ of Section 3 defined according to the standard convention. These coefficients are used by e02cbf to calculate values of the fitted function.
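Assuming zero-based offsets in the indexing formula above (an assumption made here purely for illustration), the flat coefficient array can be viewed as a rectangular $(k+1)\times (l+1)$ array, which is also the layout convenient for the coefficient inspection discussed in Section 9:

```python
import numpy as np

k, l = 3, 2
# Flat array laid out so that position i*(l+1) + j holds a_ij
# (zero-based offsets assumed for this sketch).
flat = np.arange((k + 1) * (l + 1), dtype=float)
a = flat.reshape(k + 1, l + 1)   # a[i, j] == flat[i*(l+1) + j]
assert a[2, 1] == flat[2 * (l + 1) + 1]
```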
11: $\mathbf{na}$ – IntegerInput
On entry: the dimension of the array a as declared in the (sub)program from which e02caf is called.
Constraint:
${\mathbf{na}}\ge ({\mathbf{k}}+1)\times ({\mathbf{l}}+1)$, the total number of coefficients in the fit.
12: $\mathbf{xmin}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) arrayInput
On entry: ${\mathbf{xmin}}\left(\mathit{s}\right)$ must contain ${x}_{\mathrm{min}}^{\left(\mathit{s}\right)}$, the lower end of the range of $x$ on the line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$. It must not be greater than the lowest data value of $x$ on the line. Each ${x}_{\mathrm{min}}^{\left(s\right)}$ is scaled to $-1.0$ in the fit. (See also Section 9.)
13: $\mathbf{xmax}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) arrayInput
On entry: ${\mathbf{xmax}}\left(\mathit{s}\right)$ must contain ${x}_{\mathrm{max}}^{\left(\mathit{s}\right)}$, the upper end of the range of $x$ on the line $y={y}_{\mathit{s}}$, for $\mathit{s}=1,2,\dots ,n$. It must not be less than the highest data value of $x$ on the line. Each ${x}_{\mathrm{max}}^{\left(s\right)}$ is scaled to $+1.0$ in the fit. (See also Section 9.)
14: $\mathbf{nux}\left({\mathbf{inuxp1}}\right)$ – Real (Kind=nag_wp) arrayInput
On entry: ${\mathbf{nux}}\left(\mathit{i}\right)$ must contain the coefficient of the Chebyshev polynomial of degree $(\mathit{i}-1)$ in $\overline{x}$, in the Chebyshev series representation of the polynomial factor in $\overline{x}$ which you require the fit to contain, for $\mathit{i}=1,2,\dots ,{\mathbf{inuxp1}}$. These coefficients are defined according to the standard convention of Section 3.
Constraint:
${\mathbf{nux}}\left({\mathbf{inuxp1}}\right)$ must be nonzero, unless ${\mathbf{inuxp1}}=1$, in which case nux is ignored.
15: $\mathbf{inuxp1}$ – IntegerInput
On entry: $\mathit{INUX}+1$, where $\mathit{INUX}$ is the degree of a polynomial factor in $\overline{x}$ which you require the fit to contain. (See Section 3, last paragraph.)
If this option is not required, inuxp1 should be set equal to $1$.
16: $\mathbf{nuy}\left({\mathbf{inuyp1}}\right)$ – Real (Kind=nag_wp) arrayInput
On entry: ${\mathbf{nuy}}\left(\mathit{i}\right)$ must contain the coefficient of the Chebyshev polynomial of degree $(\mathit{i}-1)$ in $\overline{y}$, in the Chebyshev series representation of the polynomial factor which you require the fit to contain, for $\mathit{i}=1,2,\dots ,{\mathbf{inuyp1}}$. These coefficients are defined according to the standard convention of Section 3.
Constraint:
${\mathbf{nuy}}\left({\mathbf{inuyp1}}\right)$ must be nonzero, unless ${\mathbf{inuyp1}}=1$, in which case nuy is ignored.
17: $\mathbf{inuyp1}$ – IntegerInput
On entry: $\mathit{INUY}+1$, where $\mathit{INUY}$ is the degree of a polynomial factor in $\overline{y}$ which you require the fit to contain. (See Section 3, last paragraph.) If this option is not required, inuyp1 should be set equal to $1$.
18: $\mathbf{work}\left({\mathbf{nwork}}\right)$ – Real (Kind=nag_wp) arrayWorkspace
19: $\mathbf{nwork}$ – IntegerInput
On entry: the dimension of the array work as declared in the (sub)program from which e02caf is called.
20: $\mathbf{ifail}$ – IntegerInput/Output
On entry: ifail must be set to $0$, $\mathrm{-1}$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes an error message to be printed and program execution to be halted; otherwise execution continues. A value of $\mathrm{-1}$ means that an error message is printed, while a value of $1$ means that it is not.
If halting is not appropriate, the value $\mathrm{-1}$ or $1$ is recommended; if message printing is also undesirable, the value $1$ is recommended. Otherwise, the value $0$ is recommended. When the value $\mathrm{-1}$ or $1$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $\mathrm{-1}$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, $\mathit{I}=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{m}}\left(\mathit{I}\right)=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{k}}=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{inuxp1}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{m}}\left(\mathit{I}\right)\ge {\mathbf{k}}-{\mathbf{inuxp1}}+2$.
On entry, ${\mathbf{inuxp1}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{inuxp1}}\ge 1$.
On entry, ${\mathbf{inuxp1}}=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{k}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{inuxp1}}\le {\mathbf{k}}+1$.
On entry, ${\mathbf{inuyp1}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{inuyp1}}\ge 1$.
On entry, ${\mathbf{inuyp1}}=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{l}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{inuyp1}}\le {\mathbf{l}}+1$.
On entry, ${\mathbf{k}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{k}}\ge 0$.
On entry, ${\mathbf{l}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{l}}\ge 0$.
On entry, mtot is too small. ${\mathbf{mtot}}=\u27e8\mathit{\text{value}}\u27e9$. Minimum possible dimension: $\u27e8\mathit{\text{value}}\u27e9$.
On entry, ${\mathbf{n}}=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{l}}=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{inuyp1}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{n}}\ge {\mathbf{l}}-{\mathbf{inuyp1}}+2$.
On entry, na is too small. ${\mathbf{na}}=\u27e8\mathit{\text{value}}\u27e9$. Minimum possible dimension: $\u27e8\mathit{\text{value}}\u27e9$.
On entry, nwork is too small. ${\mathbf{nwork}}=\u27e8\mathit{\text{value}}\u27e9$. Minimum possible dimension: $\u27e8\mathit{\text{value}}\u27e9$.
${\mathbf{ifail}}=2$
On entry, ${\mathbf{xmin}}\left(\mathit{I}\right)$ and ${\mathbf{xmax}}\left(\mathit{I}\right)$ do not span the data x values on ${\mathbf{y}}={\mathbf{y}}\left(\mathit{I}\right)$: $\mathit{I}=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{xmin}}\left(\mathit{I}\right)=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{xmax}}\left(\mathit{I}\right)=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{y}}\left(\mathit{I}\right)=\u27e8\mathit{\text{value}}\u27e9$.
${\mathbf{ifail}}=3$
On entry, $\mathit{I}=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{y}}\left(\mathit{I}\right)=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{y}}\left(\mathit{I}-1\right)=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{y}}\left(\mathit{I}\right)>{\mathbf{y}}\left(\mathit{I}-1\right)$.
On entry, the data x values are not nondecreasing for ${\mathbf{y}}={\mathbf{y}}\left(\mathit{I}\right)$: $\mathit{I}=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{y}}\left(\mathit{I}\right)=\u27e8\mathit{\text{value}}\u27e9$.
${\mathbf{ifail}}=4$
On entry, the number of distinct x values with nonzero weight on ${\mathbf{y}}={\mathbf{y}}\left(\mathit{I}\right)$ is less than ${\mathbf{k}}-{\mathbf{inuxp1}}+2$: $\mathit{I}=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{y}}\left(\mathit{I}\right)=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{k}}=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{inuxp1}}=\u27e8\mathit{\text{value}}\u27e9$.
${\mathbf{ifail}}=5$
On entry, ${\mathbf{inuxp1}}=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{nux}}\left({\mathbf{inuxp1}}\right)=\u27e8\mathit{\text{value}}\u27e9$, ${\mathbf{inuyp1}}=\u27e8\mathit{\text{value}}\u27e9$ and ${\mathbf{nuy}}\left({\mathbf{inuyp1}}\right)=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: if ${\mathbf{nux}}\left({\mathbf{inuxp1}}\right)=0.0$, ${\mathbf{inuxp1}}=1$; if ${\mathbf{nuy}}\left({\mathbf{inuyp1}}\right)=0.0$, ${\mathbf{inuyp1}}=1$.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
No error analysis for this method has been published. Practical experience with the method, however, is generally extremely satisfactory.
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
e02caf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9 Further Comments
The time taken is approximately proportional to $k\times (k\times {\mathbf{mtot}}+n\times {l}^{2})$.
The reason for allowing ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ (which are used to normalize the range of $x$) to vary with $y$ is that unsatisfactory fits can result if the highest (or lowest) data values of the normalized $x$ on each line $y={y}_{s}$ are not approximately the same. (For an explanation of this phenomenon, see page 176 of Clenshaw and Hayes (1965).) Commonly in practice, the lowest (for example) data values ${x}_{1,s}$, while not being approximately constant, do lie close to some smooth curve in the $(x,y)$ plane. Using values from this curve as the values of ${x}_{\mathrm{min}}$, different in general on each line, causes the lowest transformed data values ${\overline{x}}_{1,s}$ to be approximately constant. Sometimes, appropriate curves for ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ will be clear from the context of the problem (they need not be polynomials). If this is not the case, suitable curves can often be obtained by fitting to the lowest data values ${x}_{1,s}$ and to the corresponding highest data values of $x$, low degree polynomials in $y$, using routine e02adf, and then shifting the two curves outwards by a small amount so that they just contain all the data between them. The complete curves are not in fact supplied to the present subroutine, only their values at each ${y}_{s}$; and the values simply need to lie on smooth curves. More values on the complete curves will be required subsequently, when computing values of the fitted surface at arbitrary $y$ values.
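The endpoint-curve construction just described can be sketched as follows, with NumPy's polynomial fit standing in for e02adf; the helper name `boundary_curves`, the default degree and the `margin` safety factor are all hypothetical choices for illustration:

```python
import numpy as np

def boundary_curves(y, x_lo, x_hi, degree=2, margin=0.01):
    """Sketch: fit low-degree polynomials in y to the lowest (x_lo)
    and highest (x_hi) data x values on each line, then shift the
    curves outwards so they just contain all the data between them."""
    p_lo = np.polynomial.Polynomial.fit(y, x_lo, degree)
    p_hi = np.polynomial.Polynomial.fit(y, x_hi, degree)
    # Shift each curve just past its worst residual, plus a small margin.
    shift_lo = max(0.0, np.max(p_lo(y) - x_lo)) + margin
    shift_hi = max(0.0, np.max(x_hi - p_hi(y))) + margin
    xmin = p_lo(y) - shift_lo   # values for the xmin argument
    xmax = p_hi(y) + shift_hi   # values for the xmax argument
    return xmin, xmax
```

Only the values of these curves at each ${y}_{s}$ are passed to e02caf, but consistent values on the same smooth curves are needed later when evaluating the fit at other $y$ values.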
Naturally, a satisfactory approximation to the surface underlying the data cannot be expected if the character of the surface is not adequately represented by the data. Also, as always with polynomials, the approximating function may exhibit unwanted oscillations (particularly near the ends of the ranges) if the degrees $k$ and $l$ are taken greater than certain values, generally unknown, which depend on the data. As a rough guide, the total number of coefficients, $(k+1)\times (l+1)$, should be significantly smaller than, say not more than half, the total number of data points. Similarly, $k+1$ should be significantly smaller than most (preferably all) the ${m}_{s}$, and $l+1$ significantly smaller than $n$. Closer spacing of the data near the ends of the $x$ and $y$ ranges is an advantage. In particular, if ${\overline{y}}_{\mathit{s}}=-\mathrm{cos}(\pi (\mathit{s}-1)/(n-1))$, for $\mathit{s}=1,2,\dots ,n$, and ${\overline{x}}_{\mathit{r},s}=-\mathrm{cos}(\pi (\mathit{r}-1)/(m-1))$, for $\mathit{r}=1,2,\dots ,m$ (thus ${m}_{s}=m$ for all $s$), then the values $k=m-1$ and $l=n-1$ (so that the polynomial passes exactly through all the data points) should not give unwanted oscillations. Other datasets should be similarly satisfactory if they are everywhere at least as closely spaced as the above cosine values with $m$ replaced by $k+1$ and $n$ by $l+1$ (more precisely, if for every $s$ the largest interval between consecutive values of $\mathrm{arccos}{\overline{x}}_{\mathit{r},s}$, for $\mathit{r}=1,2,\dots ,m$, is not greater than $\pi /k$, and similarly for the ${\overline{y}}_{s}$). The polynomial obtained should always be examined graphically before acceptance. Note that, for this purpose, it is not sufficient to plot the polynomial only at the data values of $x$ and $y$: intermediate values should also be plotted, preferably via a graphics facility.
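The cosine-spacing criterion can be checked numerically; in the following Python sketch the helper names are hypothetical, and the test implements the arccos-gap condition stated above:

```python
import numpy as np

def cosine_nodes(m):
    """Chebyshev-extrema abscissae -cos(pi*(r-1)/(m-1)), r = 1..m."""
    r = np.arange(1, m + 1)
    return -np.cos(np.pi * (r - 1) / (m - 1))

def spacing_ok(xbar, k):
    """Rough guide from Section 9: the largest gap between consecutive
    arccos(xbar) values should not exceed pi/k."""
    theta = np.sort(np.arccos(np.clip(xbar, -1.0, 1.0)))
    return bool(np.max(np.diff(theta)) <= np.pi / k + 1e-12)

# Cosine-spaced data with m = k + 1 points meets the criterion exactly,
# whereas equally spaced abscissae fail near the interval ends.
print(spacing_ok(cosine_nodes(9), 8))   # True
```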
Provided the data are adequate, and the surface underlying the data is of a form that can be represented by a polynomial of the chosen degrees, the subroutine should produce a good approximation to this surface. It is not, however, the true least squares surface fit, nor even a polynomial in $x$ and $y$, the original variables (see Section 6 of Clenshaw and Hayes (1965)), except in certain special cases. The most important of these is where the data values of $x$ are the same on each line $y={y}_{s}$ (i.e., the data points lie on a rectangular mesh in the $(x,y)$ plane), the weights of the data points are all equal, and ${x}_{\mathrm{max}}$ and ${x}_{\mathrm{min}}$ are both constants (in this case they should be set to the largest and smallest data values of $x$, respectively).
If the dataset is such that it can be satisfactorily approximated by a polynomial of degrees ${k}^{\prime}$ and ${l}^{\prime}$, say, then if higher values are used for $k$ and $l$ in the subroutine, all the coefficients ${a}_{ij}$ for $i>{k}^{\prime}$ or $j>{l}^{\prime}$ will take apparently random values within a range bounded by the size of the data errors, or rather less. (This behaviour of the Chebyshev coefficients, most readily observed if they are set out in a rectangular array, closely parallels that in curve-fitting, examples of which are given in Section 8 of Hayes (1970).) In practice, therefore, to establish suitable values of ${k}^{\prime}$ and ${l}^{\prime}$, you should first be seeking (within the limitations discussed above) values for $k$ and $l$ which are large enough to exhibit the behaviour described. Values for ${k}^{\prime}$ and ${l}^{\prime}$ should then be chosen as the smallest which do not exclude any coefficients significantly larger than the random ones. A polynomial of degrees ${k}^{\prime}$ and ${l}^{\prime}$ should then be fitted to the data.
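The coefficient-inspection strategy above can be sketched as follows; `significant_degrees` and `noise_level` are hypothetical names, with the threshold standing in for the size of the apparently random coefficients:

```python
import numpy as np

def significant_degrees(a, noise_level):
    """Sketch: given the (k+1) x (l+1) coefficient array, return the
    smallest k', l' such that every coefficient with i > k' or j > l'
    is below the noise level."""
    big = np.abs(a) > noise_level
    rows = np.nonzero(big.any(axis=1))[0]   # i values with a large coefficient
    cols = np.nonzero(big.any(axis=0))[0]   # j values with a large coefficient
    k_prime = int(rows.max()) if rows.size else 0
    l_prime = int(cols.max()) if cols.size else 0
    return k_prime, l_prime

# Coefficients set out in a rectangular array, as suggested in Section 9:
# the third row and column are dominated by noise of size ~0.03.
a = np.array([[9.00, 3.00, 0.01],
              [2.00, 1.00, 0.02],
              [0.03, 0.01, 0.02]])
print(significant_degrees(a, 0.1))   # (1, 1)
```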
If the option to force the fit to contain a given polynomial factor in $x$ is used and if zeros of the chosen factor coincide with data $x$ values on any line, then the effective number of data points on that line is reduced by the number of such coincidences. A similar consideration applies when forcing the $y$-direction. No account is taken of this by the subroutine when testing that the degrees $k$ and $l$ have not been chosen too large.
10 Example
This example reads data in the following order, using the notation of the argument list for e02caf above: